
Multilevel depth and image fusion for human activity detection.

Bingbing Ni, Yong Pei, Pierre Moulin, Shuicheng Yan.   

Abstract

Recognizing complex human activities usually requires detecting and modeling individual visual features and the interactions between them. Current methods rely only on visual features extracted from 2-D images, which often leads to unreliable salient-feature detection and inaccurate modeling of the interaction context between individual features. In this paper, we show that these problems can be addressed by combining data from a conventional camera and a depth sensor (e.g., Microsoft Kinect). We propose a novel complex activity recognition and localization framework that effectively fuses information from both grayscale and depth image channels at multiple levels of the video processing pipeline. At the individual visual feature detection level, depth-based filters are applied to the detected human/object rectangles to remove false detections. At the next level, interaction modeling, 3-D spatial and temporal contexts among human subjects or objects are extracted by integrating information from both grayscale and depth images. Depth information is also used to distinguish different types of indoor scenes. Finally, a latent structural model is developed to integrate the information from multiple levels of video processing for activity detection. Extensive experiments on two activity recognition benchmarks (one with depth information) and a challenging grayscale + depth human activity database containing complex human-human, human-object, and human-surroundings interactions demonstrate the effectiveness of the proposed multilevel grayscale + depth fusion scheme, which achieves higher recognition and localization accuracies than previous methods.
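The depth-based filtering step described above can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, thresholds, and the assumption that missing Kinect depth readings are encoded as 0 are all hypothetical choices for illustration: a detection rectangle from the grayscale image is kept only if most of its pixels carry valid depth and its median depth falls in a plausible indoor range.

```python
import numpy as np

def depth_filter(boxes, depth_map, min_depth=0.5, max_depth=4.5,
                 min_valid_ratio=0.6):
    """Reject 2-D detection rectangles with implausible depth statistics.

    Hypothetical illustration of depth-based false-detection removal:
    `boxes` are (x, y, w, h) rectangles, `depth_map` holds depth in meters,
    with 0 marking a missing reading (an assumed Kinect-style convention).
    """
    kept = []
    for (x, y, w, h) in boxes:
        patch = depth_map[y:y + h, x:x + w]
        valid = patch[patch > 0]                      # drop missing readings
        if valid.size < min_valid_ratio * patch.size:
            continue                                  # too little valid depth
        if min_depth <= np.median(valid) <= max_depth:
            kept.append((x, y, w, h))                 # plausible indoor depth
    return kept
```

A box landing on a region where the depth sensor returned no data (or an out-of-range distance) is discarded, while detections consistent with a person or object at indoor range survive.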


Year:  2013        PMID: 23996589     DOI: 10.1109/TCYB.2013.2276433

Source DB:  PubMed          Journal:  IEEE Trans Cybern        ISSN: 2168-2267            Impact factor:   11.448


Related articles: 8 in total

1.  Self-organizing neural integration of pose-motion features for human action recognition.

Authors:  German I Parisi; Cornelius Weber; Stefan Wermter
Journal:  Front Neurorobot       Date:  2015-06-09       Impact factor: 2.650

2.  Learning dictionaries of sparse codes of 3D movements of body joints for real-time human activity understanding.

Authors:  Jin Qi; Zhiyong Yang
Journal:  PLoS One       Date:  2014-12-04       Impact factor: 3.240

3.  Adaptive Local Spatiotemporal Features from RGB-D Data for One-Shot Learning Gesture Recognition.

Authors:  Jia Lin; Xiaogang Ruan; Naigong Yu; Yee-Hong Yang
Journal:  Sensors (Basel)       Date:  2016-12-17       Impact factor: 3.576

4.  A Human Activity Recognition System Based on Dynamic Clustering of Skeleton Data.

Authors:  Alessandro Manzi; Paolo Dario; Filippo Cavallo
Journal:  Sensors (Basel)       Date:  2017-05-11       Impact factor: 3.576

5.  Prediction of Human Activities Based on a New Structure of Skeleton Features and Deep Learning Model.

Authors:  Neziha Jaouedi; Francisco J Perales; José Maria Buades; Noureddine Boujnah; Med Salim Bouhlel
Journal:  Sensors (Basel)       Date:  2020-09-01       Impact factor: 3.576

6.  HIT HAR: Human Image Threshing Machine for Human Activity Recognition Using Deep Learning Models.

Authors:  Alwin Poulose; Jung Hwan Kim; Dong Seog Han
Journal:  Comput Intell Neurosci       Date:  2022-10-06

7.  Hierarchical Activity Recognition Using Smart Watches and RGB-Depth Cameras.

Authors:  Zhen Li; Zhiqiang Wei; Lei Huang; Shugang Zhang; Jie Nie
Journal:  Sensors (Basel)       Date:  2016-10-15       Impact factor: 3.576

8.  Human activity recognition in artificial intelligence framework: a narrative review.

Authors:  Neha Gupta; Suneet K Gupta; Rajesh K Pathak; Vanita Jain; Parisa Rashidi; Jasjit S Suri
Journal:  Artif Intell Rev       Date:  2022-01-18       Impact factor: 9.588

