Gait-based person recognition using arbitrary view transformation model.

Daigo Muramatsu, Akira Shiraishi, Yasushi Makihara, Md Zasim Uddin, Yasushi Yagi.   

Abstract

Gait recognition is a useful biometric trait for person authentication because it is usable even with low image resolution. One challenge is robustness to a view change (cross-view matching); view transformation models (VTMs) have been proposed to solve this. The VTMs work well if the target views are the same as their discrete training views. However, the gait traits are observed from an arbitrary view in a real situation. Thus, the target views may not coincide with discrete training views, resulting in recognition accuracy degradation. We propose an arbitrary VTM (AVTM) that accurately matches a pair of gait traits from an arbitrary view. To realize an AVTM, we first construct 3D gait volume sequences of training subjects, disjoint from the test subjects in the target scene. We then generate 2D gait silhouette sequences of the training subjects by projecting the 3D gait volume sequences onto the same views as the target views, and train the AVTM with gait features extracted from the 2D sequences. In addition, we extend our AVTM by incorporating a part-dependent view selection scheme (AVTM_PdVS), which divides the gait feature into several parts, and sets part-dependent destination views for transformation. Because appropriate destination views may differ for different body parts, the part-dependent destination view selection can suppress transformation errors, leading to increased recognition accuracy. Experiments using data sets collected in different settings show that the AVTM improves the accuracy of cross-view matching and that the AVTM_PdVS further improves the accuracy in many cases, in particular, verification scenarios.
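The core of a view transformation model is a learned mapping that carries a gait feature observed from one view to another so that probe and gallery can be compared in a common view. As a minimal sketch of that idea (not the authors' implementation; the linear ridge-regression form, function names, and feature shapes below are illustrative assumptions), one can fit a matrix that maps training subjects' features from a source view to a destination view and then apply it to probe features:

```python
# Hedged sketch of a linear view transformation model (VTM).
# Assumption: gait features (e.g., flattened gait energy images) are
# vectors, and the source->destination mapping is approximated linearly.
import numpy as np

def learn_vtm(feats_src, feats_dst, reg=1e-3):
    """Learn a matrix W such that W @ x_src approximates x_dst.

    feats_src, feats_dst: (n_subjects, dim) features of the SAME training
    subjects observed from the source and destination views.
    Ridge regularization keeps the fit stable when dim > n_subjects.
    """
    X, Y = feats_src, feats_dst                 # rows index subjects
    A = X.T @ X + reg * np.eye(X.shape[1])      # regularized Gram matrix
    W = np.linalg.solve(A, X.T @ Y).T           # least-squares solution
    return W

def transform(W, x_src):
    """Map a probe feature from the source view into the destination view."""
    return W @ x_src
```

After transformation, the probe can be matched against gallery features in the destination view with an ordinary distance measure; the paper's AVTM additionally synthesizes the training pairs `feats_src`/`feats_dst` by projecting 3D gait volumes onto the exact probe and gallery views, and AVTM_PdVS fits such a mapping per body part with a part-dependent destination view.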

Year: 2014    PMID: 25423652    DOI: 10.1109/TIP.2014.2371335

Source DB: PubMed    Journal: IEEE Trans Image Process    ISSN: 1057-7149    Impact factor: 10.856


Related articles (4 in total)

1.  Learning Efficient Spatial-Temporal Gait Features with Deep Learning for Human Identification.

Authors:  Wu Liu; Cheng Zhang; Huadong Ma; Shuangqun Li
Journal:  Neuroinformatics       Date:  2018-10

2.  A View Transformation Model Based on Sparse and Redundant Representation for Human Gait Recognition.

Authors:  Abbas Ghebleh; Mohsen Ebrahimi Moghaddam
Journal:  J Med Signals Sens       Date:  2020-07-03

3.  Free-view gait recognition.

Authors:  Yonghong Tian; Lan Wei; Shijian Lu; Tiejun Huang
Journal:  PLoS One       Date:  2019-04-16       Impact factor: 3.240

4.  Gait Recognition and Understanding Based on Hierarchical Temporal Memory Using 3D Gait Semantic Folding.

Authors:  Jian Luo; Tardi Tjahjadi
Journal:  Sensors (Basel)       Date:  2020-03-16       Impact factor: 3.576

