
Details

Region-based Mixture Models for human action recognition in low-resolution videos (EI-indexed)

Document type: Journal article

English title: Region-based Mixture Models for human action recognition in low-resolution videos

Authors: Ying Zhao[1]; Huijun Di[1]; Jian Zhang[2]; Yao Lu[1]; Feng Lv[1]; Yufang Li[3]

First author: Ying Zhao

Affiliations: [1] Beijing Lab. of Intell. Inf. Technol., Beijing Inst. of Technol., Beijing, China; [2] Adv. Analytics Inst., Univ. of Technol., Sydney, NSW, Australia; [3] Teachers Coll., Beijing Union Univ., Beijing, China

First affiliation: Beijing Lab. of Intell. Inf. Technol., Beijing Inst. of Technol., Beijing, China

Year: 2017

Volume: 247

Pages: 1-15

Journal: Neurocomputing

Indexed by: EI (accession no. 16825901)

Language: English

Keywords: feature extraction - image classification - image motion analysis - image representation - image resolution - image sequences - mixture models - object tracking - shape recognition - video signal processing

Abstract: State-of-the-art performance in human action recognition is achieved by using dense trajectories extracted with optical flow algorithms. However, optical flow algorithms are far from perfect in low-resolution (LR) videos. In addition, the spatial and temporal layout of features is a powerful cue for action discrimination, yet most existing methods encode the layout by first segmenting body parts, which is not feasible in LR videos. To address these problems, we adopt the Layered Elastic Motion Tracking (LEMT) method to extract a set of long-term motion trajectories and a long-term common shape from each video sequence, where the extracted trajectories are much denser than those of sparse interest points (SIPs); we then present a hybrid feature representation that integrates both shape and motion features; and finally we propose a Region-based Mixture Model (RMM) for action classification. The RMM encodes the spatial layout of features without requiring body-part segmentation. Experimental results show that the approach is effective and, more importantly, generalizes well to LR recognition tasks. [All rights reserved Elsevier].
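The classification step described in the abstract fits a mixture model per action class over spatially tagged local features. The sketch below illustrates that general idea only, not the paper's actual RMM: here the spatial layout is encoded simply by appending each feature's normalized (x, y) location to its descriptor, one diagonal-covariance Gaussian mixture is fit per class with a crude hard-EM loop, and a video is labeled by the class whose mixture gives its features the highest mean log-likelihood. All data is synthetic; every name and parameter is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def video_features(cls, n=200):
    # Synthetic stand-in for per-trajectory descriptors: four
    # motion/shape dimensions whose mean shifts with the class,
    # plus a normalized (x, y) feature location (the "layout" part).
    desc = rng.normal(loc=2.0 * cls, scale=0.5, size=(n, 4))
    xy = rng.normal(loc=0.3 + 0.4 * cls, scale=0.1, size=(n, 2))
    return np.hstack([desc, xy])

class DiagGMM:
    """Tiny diagonal-covariance Gaussian mixture fit by hard EM."""

    def __init__(self, k=2, iters=20):
        self.k, self.iters = k, iters

    def fit(self, X):
        self.mu = X[rng.choice(len(X), self.k, replace=False)].copy()
        self.var = np.ones((self.k, X.shape[1]))
        for _ in range(self.iters):
            z = self._loglik_per_comp(X).argmax(axis=1)  # hard assignment
            for j in range(self.k):
                pts = X[z == j]
                if len(pts) > 1:
                    self.mu[j] = pts.mean(axis=0)
                    self.var[j] = pts.var(axis=0) + 1e-3  # regularize
        return self

    def _loglik_per_comp(self, X):
        # log N(x | mu_j, diag(var_j)) for each sample and component j
        diff2 = (X[:, None, :] - self.mu[None]) ** 2 / self.var[None]
        return -0.5 * (diff2 + np.log(2 * np.pi * self.var[None])).sum(-1)

    def score(self, X):
        # mean per-feature log-likelihood under the mixture
        # (uniform component weights), via a stable log-sum-exp
        comp = self._loglik_per_comp(X) - np.log(self.k)
        m = comp.max(axis=1, keepdims=True)
        return float((m[:, 0] + np.log(np.exp(comp - m).sum(axis=1))).mean())

# One mixture per action class; classify by maximum likelihood.
models = {c: DiagGMM(k=2).fit(video_features(c)) for c in (0, 1)}

def classify(X):
    return max(models, key=lambda c: models[c].score(X))
```

On well-separated synthetic classes like these, `classify(video_features(0))` recovers class 0; the paper's RMM differs in how regions are defined, but the train-per-class, score-by-likelihood structure is the same family of approach.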

