
NTU RGB+D 120: A Large-Scale Benchmark for 3D Human Activity Understanding.

Jun Liu, Amir Shahroudy, Mauricio Perez, Gang Wang, Ling-Yu Duan, Alex C. Kot.

Abstract

Research on depth-based human activity analysis achieved outstanding performance and demonstrated the effectiveness of 3D representation for action recognition. The existing depth-based and RGB+D-based action recognition benchmarks have a number of limitations, including the lack of large-scale training samples, realistic number of distinct class categories, diversity in camera views, varied environmental conditions, and variety of human subjects. In this work, we introduce a large-scale dataset for RGB+D human action recognition, which is collected from 106 distinct subjects and contains more than 114 thousand video samples and 8 million frames. This dataset contains 120 different action classes including daily, mutual, and health-related activities. We evaluate the performance of a series of existing 3D activity analysis methods on this dataset, and show the advantage of applying deep learning methods for 3D-based human action recognition. Furthermore, we investigate a novel one-shot 3D activity recognition problem on our dataset, and a simple yet effective Action-Part Semantic Relevance-aware (APSR) framework is proposed for this task, which yields promising results for recognition of the novel action classes. We believe the introduction of this large-scale dataset will enable the community to apply, adapt, and develop various data-hungry learning techniques for depth-based and RGB+D-based human activity understanding.
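The dataset's public release identifies each video sample by a compact code combining setup, camera, performer (subject), replication, and action-class numbers. As a hedged illustration only (the exact naming convention and field widths should be verified against the dataset's own documentation), a minimal parser for that `SsssCcccPpppRrrrAaaa` pattern might look like:

```python
import re

# Assumed NTU RGB+D sample-id layout, e.g. "S018C002P042R001A120":
# S = collection setup, C = camera, P = performer (subject),
# R = replication, A = action class (1..120).
SAMPLE_ID = re.compile(
    r"S(?P<setup>\d{3})C(?P<camera>\d{3})"
    r"P(?P<performer>\d{3})R(?P<replication>\d{3})A(?P<action>\d{3})"
)

def parse_sample_id(name: str) -> dict:
    """Split an NTU RGB+D-style sample name into its numeric fields."""
    m = SAMPLE_ID.fullmatch(name)
    if m is None:
        raise ValueError(f"not a recognized sample id: {name!r}")
    return {k: int(v) for k, v in m.groupdict().items()}

info = parse_sample_id("S018C002P042R001A120")
print(info)
```

Such a parser is useful when building cross-subject or cross-setup evaluation splits, since both are defined by the performer and setup fields of the sample id.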


Year:  2019        PMID: 31095476     DOI: 10.1109/TPAMI.2019.2916873

Source DB:  PubMed          Journal:  IEEE Trans Pattern Anal Mach Intell        ISSN: 0162-8828            Impact factor:   6.226


Related articles (18 in total)

1.  C-MHAD: Continuous Multimodal Human Action Dataset of Simultaneous Video and Inertial Sensing.

Authors:  Haoran Wei; Pranav Chopada; Nasser Kehtarnavaz
Journal:  Sensors (Basel)       Date:  2020-05-20       Impact factor: 3.576

2.  GAS-GCN: Gated Action-Specific Graph Convolutional Networks for Skeleton-Based Action Recognition.

Authors:  Wensong Chan; Zhiqiang Tian; Yang Wu
Journal:  Sensors (Basel)       Date:  2020-06-21       Impact factor: 3.576

3.  Recognition of Rare Low-Moral Actions Using Depth Data.

Authors:  Kanghui Du; Thomas Kaczmarek; Dražen Brščić; Takayuki Kanda
Journal:  Sensors (Basel)       Date:  2020-05-12       Impact factor: 3.576

4.  Whole and Part Adaptive Fusion Graph Convolutional Networks for Skeleton-Based Action Recognition.

Authors:  Qi Zuo; Lian Zou; Cien Fan; Dongqian Li; Hao Jiang; Yifeng Liu
Journal:  Sensors (Basel)       Date:  2020-12-13       Impact factor: 3.576

5.  A Hierarchical Learning Approach for Human Action Recognition.

Authors:  Nicolas Lemieux; Rita Noumeir
Journal:  Sensors (Basel)       Date:  2020-09-01       Impact factor: 3.576

6.  Enhanced Spatial and Extended Temporal Graph Convolutional Network for Skeleton-Based Action Recognition.

Authors:  Fanjia Li; Juanjuan Li; Aichun Zhu; Yonggang Xu; Hongsheng Yin; Gang Hua
Journal:  Sensors (Basel)       Date:  2020-09-15       Impact factor: 3.576

7.  Activity Recognition for Ambient Assisted Living with Videos, Inertial Units and Ambient Sensors.

Authors:  Caetano Mazzoni Ranieri; Scott MacLeod; Mauro Dragone; Patricia Amancio Vargas; Roseli Aparecida Francelin Romero
Journal:  Sensors (Basel)       Date:  2021-01-24       Impact factor: 3.576

8.  Improved Action Recognition with Separable Spatio-Temporal Attention Using Alternative Skeletal and Video Pre-Processing.

Authors:  Pau Climent-Pérez; Francisco Florez-Revuelta
Journal:  Sensors (Basel)       Date:  2021-02-02       Impact factor: 3.576

9.  MSST-RT: Multi-Stream Spatial-Temporal Relative Transformer for Skeleton-Based Action Recognition.

Authors:  Yan Sun; Yixin Shen; Liyan Ma
Journal:  Sensors (Basel)       Date:  2021-08-07       Impact factor: 3.576

10.  TUHAD: Taekwondo Unit Technique Human Action Dataset with Key Frame-Based CNN Action Recognition.

Authors:  Jinkue Lee; Hoeryong Jung
Journal:  Sensors (Basel)       Date:  2020-08-28       Impact factor: 3.576

