
View Transformation Model Incorporating Quality Measures for Cross-View Gait Recognition.

Daigo Muramatsu, Yasushi Makihara, Yasushi Yagi.   

Abstract

Cross-view gait recognition authenticates a person using a pair of gait image sequences captured from different observation views. View difference degrades gait recognition accuracy, and several solutions have been proposed to suppress this degradation. One useful solution is to apply a view transformation model (VTM) that encodes a joint subspace of multiview gait features trained with auxiliary data from multiple training subjects, who are different from the test subjects (recognition targets). In the VTM framework, a gait feature with a destination view is generated from one with a source view by estimating a vector on the trained joint subspace, and gait features with the same destination view are compared for recognition. Although this framework improves recognition accuracy as a whole, the fit of the VTM depends on the given gait feature pair, which causes an inhomogeneously biased dissimilarity score. Because it is well known that normalization of such inhomogeneously biased scores generally improves recognition accuracy, we propose a VTM incorporating a score normalization framework with quality measures that encode the degree of the bias. From a pair of gait features, we calculate two quality measures and use them, together with the biased dissimilarity score, to calculate the posterior probability that both gait features originate from the same subject. The proposed method was evaluated on two gait datasets: a large-population gait dataset of over-ground walking (course dataset) and a treadmill gait dataset. The experimental results show that incorporating the quality measures improves accuracy in many cross-view settings.
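The VTM pipeline described in the abstract — fit a joint subspace from paired multiview training features, map a source-view feature to the destination view via subspace coefficients, then normalize the biased dissimilarity score with quality measures — can be sketched as below. This is a minimal illustrative sketch only: the subspace factorization (truncated SVD), the fit-error quality measure, and the logistic pseudo-posterior with hand-picked weights are all assumptions for exposition, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64          # dimensionality of one gait feature (e.g., flattened silhouette feature)
n_train = 100   # auxiliary training subjects (disjoint from test subjects)

# Paired training features for two views, stacked so that one coefficient
# vector jointly explains both views of the same subject.
F_A = rng.normal(size=(d, n_train))          # view A (source)
F_B = rng.normal(size=(d, n_train))          # view B (destination)
F = np.vstack([F_A, F_B])                    # (2d, n_train)

# Joint subspace via truncated SVD of the stacked feature matrix.
k = 16
U, _, _ = np.linalg.svd(F, full_matrices=False)
U_k = U[:, :k]                               # basis of the joint subspace
P_A, P_B = U_k[:d], U_k[d:]                  # view-specific sub-bases

def transform_A_to_B(x_A):
    """Estimate subspace coefficients from a view-A feature and
    reconstruct the corresponding view-B feature.
    Returns the transformed feature and a quality measure
    (here: the VTM fitting error for the source feature)."""
    c, *_ = np.linalg.lstsq(P_A, x_A, rcond=None)
    x_B_hat = P_B @ c
    fit_error = np.linalg.norm(P_A @ c - x_A)
    return x_B_hat, fit_error

# Compare a probe (view A) against a gallery feature (view B).
probe_A = rng.normal(size=d)
gallery_B = rng.normal(size=d)
probe_B_hat, q = transform_A_to_B(probe_A)
score = np.linalg.norm(probe_B_hat - gallery_B)   # biased dissimilarity

# Quality-aware normalization (illustrative): combine the raw score and the
# quality measure into a pseudo-posterior of a same-subject match via a
# logistic model with assumed (not learned) weights.
w_s, w_q, b = -0.5, -0.2, 5.0
posterior = 1.0 / (1.0 + np.exp(-(w_s * score + w_q * q + b)))
```

In a real system the combination weights would be learned from training pairs, and the quality measures would be computed for both features of the pair; the point of the sketch is only the data flow: transform, score, then quality-conditioned normalization.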


Year:  2015        PMID: 26259209     DOI: 10.1109/TCYB.2015.2452577

Source DB:  PubMed          Journal:  IEEE Trans Cybern        ISSN: 2168-2267            Impact factor:   11.448


Related articles (3 in total):

1.  A View Transformation Model Based on Sparse and Redundant Representation for Human Gait Recognition.

Authors:  Abbas Ghebleh; Mohsen Ebrahimi Moghaddam
Journal:  J Med Signals Sens       Date:  2020-07-03

2.  Free-view gait recognition.

Authors:  Yonghong Tian; Lan Wei; Shijian Lu; Tiejun Huang
Journal:  PLoS One       Date:  2019-04-16       Impact factor: 3.240

3.  Gait Recognition and Understanding Based on Hierarchical Temporal Memory Using 3D Gait Semantic Folding.

Authors:  Jian Luo; Tardi Tjahjadi
Journal:  Sensors (Basel)       Date:  2020-03-16       Impact factor: 3.576

