
Action recognition using mined hierarchical compound features.

Andrew Gilbert, John Illingworth, Richard Bowden.

Abstract

The field of action recognition has seen a large increase in activity in recent years. Much of the progress has come from incorporating ideas from single-frame object recognition and adapting them for temporal-based action recognition. Inspired by the success of interest points in the 2D spatial domain, their 3D (space-time) counterparts typically form the basic components used to describe actions, and in action recognition the features used are often engineered to fire sparsely. This is to ensure that the problem remains tractable; however, it can sacrifice recognition accuracy, as it cannot be assumed that the features optimal for class discrimination are obtained in this way. In contrast, we propose to start from an overcomplete set of simple 2D corners in both space and time. These are grouped spatially and temporally through a hierarchical process with an increasing search area. At each stage of the hierarchy, the most distinctive and descriptive features are learned efficiently through data mining, which allows large amounts of data to be searched for frequently reoccurring patterns of features. At each level of the hierarchy, the mined compound features become more complex, discriminative, and sparse. This results in fast, accurate recognition with real-time performance on high-resolution video. As the compound features are constructed and selected for their ability to discriminate, their speed and accuracy increase at each level of the hierarchy. The approach is tested on four state-of-the-art data sets: the popular KTH data set, to provide a comparison with other state-of-the-art approaches; the Multi-KTH data set, to illustrate performance at simultaneous multi-action classification despite no explicit localization information being provided during training; and the recent Hollywood and Hollywood2 data sets, which provide challenging complex actions taken from commercial movie sequences. For all four data sets, the proposed hierarchical approach outperforms all other methods reported thus far in the literature and can achieve real-time operation.
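The mining step the abstract describes can be sketched as Apriori-style frequent-itemset mining: each spatio-temporal neighbourhood is encoded as a "transaction" of quantized corner-feature IDs, and itemsets that reoccur in enough transactions become the compound features passed to the next hierarchy level. The function name, transaction encoding, and thresholds below are illustrative assumptions, not the paper's exact formulation.

```python
from collections import Counter

def mine_compound_features(transactions, min_support, max_size=3):
    """Apriori-style frequent-itemset mining (illustrative sketch).

    transactions -- list of sets of quantized corner-feature IDs, one set
                    per spatio-temporal neighbourhood.
    min_support  -- fraction of transactions an itemset must appear in.
    max_size     -- largest compound feature (itemset) considered.
    """
    n = len(transactions)
    # Level 1: frequent single features.
    counts = Counter(f for t in transactions for f in t)
    frequent = {frozenset([f]) for f, c in counts.items() if c / n >= min_support}
    all_frequent = set(frequent)
    size = 1
    while frequent and size < max_size:
        size += 1
        # Candidate generation: join frequent itemsets from the previous level.
        candidates = {a | b for a in frequent for b in frequent if len(a | b) == size}
        counts = Counter()
        for t in transactions:
            for cand in candidates:
                if cand <= t:  # candidate occurs in this transaction
                    counts[cand] += 1
        frequent = {c for c in candidates if counts[c] / n >= min_support}
        all_frequent |= frequent
    return all_frequent
```

For example, with transactions `[{1, 2, 3}, {1, 2}, {2, 3}, {1, 2, 4}]` and `min_support=0.5`, the pair `{1, 2}` (support 3/4) survives while the rare feature `{4}` (support 1/4) is pruned; this mirrors how the mined compound features become sparser and more discriminative at each level.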


Year:  2011        PMID: 20714014     DOI: 10.1109/TPAMI.2010.144

Source DB:  PubMed          Journal:  IEEE Trans Pattern Anal Mach Intell        ISSN: 0162-8828            Impact factor:   6.226


  4 in total

1.  An information-rich sampling technique over spatio-temporal CNN for classification of human actions in videos.

Authors:  S H Shabbeer Basha; Viswanath Pulabaigari; Snehasis Mukherjee
Journal:  Multimed Tools Appl       Date:  2022-05-09       Impact factor: 2.577

2.  Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition.

Authors:  Na Shu; Zhiyong Gao; Xiangan Chen; Haihua Liu
Journal:  PLoS One       Date:  2015-07-01       Impact factor: 3.240

3.  Multiview Layer Fusion Model for Action Recognition Using RGBD Images.

Authors:  Pongsagorn Chalearnnetkul; Nikom Suvonvorn
Journal:  Comput Intell Neurosci       Date:  2018-06-20

4.  A generalized pyramid matching kernel for human action recognition in realistic videos.

Authors:  Jun Zhu; Quan Zhou; Weijia Zou; Rui Zhang; Wenjun Zhang
Journal:  Sensors (Basel)       Date:  2013-10-24       Impact factor: 3.576

