Yonghong Tian, Lan Wei, Shijian Lu, Tiejun Huang.
Abstract
Human gait has been shown to be an effective biometric measure for person identification at a distance. However, changes in the view angle pose a major challenge for gait recognition, as human gait silhouettes usually differ across view angles. Traditionally, such a multi-view gait recognition problem is tackled by a View Transformation Model (VTM), which transforms gait features from multiple gallery views to the probe view so that gait similarity can be evaluated. In real-world environments, however, gait sequences may be captured in an uncontrolled scene where the view angle is often unknown, dynamically changing, or outside any predefined view (thus making VTM inapplicable). To address this free-view gait recognition problem, we propose an innovative view-adaptive mapping (VAM) approach. The VAM employs a novel walking trajectory fitting (WTF) to estimate the view angles of a gait sequence, and a joint gait manifold (JGM) to find the optimal manifold between the probe data and relevant gallery data for gait similarity evaluation. Additionally, a RankSVM-based algorithm is developed to supplement the gallery data for subjects whose gallery features are only available in predefined views. Extensive experiments on both indoor and outdoor datasets demonstrate that the VAM outperforms several reference methods remarkably in free-view gait recognition.
Year: 2019 | PMID: 30990804 | PMCID: PMC6467377 | DOI: 10.1371/journal.pone.0214389
Source DB: PubMed | Journal: PLoS One | ISSN: 1932-6203 | Impact factor: 3.240
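To make the pipeline described in the abstract more concrete, the sketch below shows a minimal free-view matching loop: estimate the probe view, select the gallery data recorded under the closest available views, and score gait similarity. The paper's WTF and JGM components are replaced here by simple stand-ins (plain nearest-view selection and Euclidean distance over GEI-like vectors); all function names and the toy data are illustrative, not the authors' implementation.

```python
import numpy as np

def closest_gallery_views(probe_view_deg, gallery_views_deg, k=2):
    """Return indices of the k gallery views nearest to the estimated probe view."""
    diffs = np.abs(np.asarray(gallery_views_deg, dtype=float) - probe_view_deg)
    return np.argsort(diffs)[:k]

def identify(probe_feature, probe_view_deg, gallery):
    """gallery maps subject_id -> {view_deg: feature_vector}; returns the best-matching id."""
    best_id, best_dist = None, np.inf
    for subject_id, views in gallery.items():
        view_list = sorted(views)                        # available view angles for this subject
        for idx in closest_gallery_views(probe_view_deg, view_list):
            dist = np.linalg.norm(probe_feature - views[view_list[idx]])
            if dist < best_dist:
                best_id, best_dist = subject_id, dist
    return best_id

# Toy usage with random 'GEI' vectors under two predefined views per subject.
rng = np.random.default_rng(0)
gallery = {s: {v: rng.random(64) for v in (54, 90)} for s in ("subj_a", "subj_b")}
probe = gallery["subj_a"][90] + 0.01 * rng.random(64)
print(identify(probe, probe_view_deg=80.0, gallery=gallery))   # -> subj_a
```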
Fig 1. In weakly controlled, loosely defined free-view scenes, both probe and gallery gait sequences may be captured from arbitrary views.
For example, as shown in this figure, the two groups of samples from the PKU HumanID Gait Dataset [12] have low-quality gait features.
Fig 2. The framework of our proposed view-adaptive mapping (VAM) technique for free-view gait recognition.
Fig 3. Gait period analysis: (a) the gait silhouettes for the extracted figure-centric images of a walking person; (b) the estimated NACs; and (c) the first-order derivative curve of the NACs.
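Assuming the NACs in panel (b) are normalized autocorrelation curves computed from a per-frame silhouette signal (e.g., foreground area), a minimal sketch of the period analysis could look like the following; the choice of signal and the peak-picking rule are assumptions, not the paper's exact procedure.

```python
import numpy as np

def normalized_autocorrelation(signal):
    """Normalized autocorrelation of a zero-mean 1-D signal, one value per lag."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    full = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..N-1
    return full / full[0]                                  # normalize so lag 0 == 1

def estimate_gait_period(signal, min_lag=10):
    """Pick the first strong peak of the NAC past min_lag frames.

    A peak is detected where the first-order derivative changes sign from
    positive to negative, mirroring the derivative curve in panel (c).
    """
    nac = normalized_autocorrelation(signal)
    deriv = np.diff(nac)
    for lag in range(min_lag, len(deriv) - 1):
        if deriv[lag - 1] > 0 and deriv[lag] <= 0:
            return lag
    return None

# Toy usage: a noisy periodic signal standing in for per-frame foreground area.
t = np.arange(200)
area = 100 + 10 * np.sin(2 * np.pi * t / 25) + np.random.default_rng(1).normal(0, 1, 200)
print(estimate_gait_period(area))   # roughly 25 frames per gait cycle
```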
Fig 4. An example of gait silhouettes extracted from a real free-view scene: (a) a sample frame from the camera WMHD in the PKU HumanID dataset, where three pedestrians are labeled using colored bounding boxes (green for subject 0, yellow for subject 6 and blue for subject 8); (b), (c) and (d) the extracted gait silhouettes for the three subjects.
Fig 5. Illustration of the WTF algorithm, using the man in the grey T-shirt as an example: the blue line represents his walking trajectory over several gait cycles, while the green line denotes the walking line fitted within one gait period.
For better visualization, all of his silhouette images within one gait period are manually superimposed in a single picture.
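A rough sketch of the line-fitting step suggested by this figure is given below: a least-squares line through the per-frame centroids yields a walking direction, from which a 2-D view angle can be read off. The paper's WTF presumably also accounts for camera geometry, so treat this purely as an illustration; the centroid input and the angle convention are assumptions.

```python
import numpy as np

def fit_walking_line(centroids):
    """Least-squares straight line through per-frame centroids (x, y).

    Returns a unit direction vector of the fitted walking line, oriented
    along the temporal order of the frames.
    """
    pts = np.asarray(centroids, dtype=float)
    pts = pts - pts.mean(axis=0)
    # Principal direction of the point cloud = direction of the fitted line.
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    direction = vt[0]
    # Flip the direction if it points against the walking order.
    if np.dot(pts[-1] - pts[0], direction) < 0:
        direction = -direction
    return direction

def view_angle_deg(direction):
    """Angle between the walking direction and the image x-axis, in degrees.

    Only a 2-D stand-in for the paper's view-angle estimate; WTF presumably
    also reasons about the camera geometry.
    """
    return float(np.degrees(np.arctan2(direction[1], direction[0])) % 360.0)

# Toy usage: centroids drifting to the upper right across one gait period.
cents = [(10 + 2 * i, 50 + 0.5 * i) for i in range(30)]
print(view_angle_deg(fit_walking_line(cents)))   # roughly 14 degrees
```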
Fig 6. Examples of GEI features under different views in the CASIA-B dataset.
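A GEI (gait energy image) is conventionally the per-pixel average of size-normalized, aligned binary silhouettes over one gait cycle; a minimal sketch of that computation is shown below, assuming alignment and size normalization are handled upstream.

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Average aligned binary silhouettes over one gait cycle into a GEI.

    `silhouettes` has shape (T, H, W) with values in {0, 1}; each frame is
    assumed to be size-normalized and horizontally centered on the walker.
    """
    sils = np.asarray(silhouettes, dtype=float)
    return sils.mean(axis=0)   # pixel value = fraction of the cycle it is foreground

# Toy usage: 20 random binary frames of size 128 x 88.
frames = (np.random.default_rng(2).random((20, 128, 88)) > 0.5).astype(np.uint8)
gei = gait_energy_image(frames)
print(gei.shape, gei.min(), gei.max())
```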
Fig 7. Illustration of the joint gait manifold.
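The construction of the joint gait manifold is not detailed in this entry; as a loose stand-in, the sketch below jointly embeds probe and gallery feature vectors with Isomap and measures distances in the shared low-dimensional space. The use of Isomap and the toy dimensions are assumptions, not the authors' method.

```python
import numpy as np
from sklearn.manifold import Isomap

def joint_embedding_distances(probe_feats, gallery_feats, n_components=2, n_neighbors=5):
    """Embed probe and gallery features jointly; return probe-to-gallery distances."""
    stacked = np.vstack([probe_feats, gallery_feats])
    embedded = Isomap(n_neighbors=n_neighbors, n_components=n_components).fit_transform(stacked)
    probe_emb = embedded[: len(probe_feats)]
    gallery_emb = embedded[len(probe_feats):]
    # Pairwise Euclidean distances between embedded probe and gallery samples.
    return np.linalg.norm(probe_emb[:, None, :] - gallery_emb[None, :, :], axis=-1)

# Toy usage with random 64-D 'GEI' vectors.
rng = np.random.default_rng(3)
dists = joint_embedding_distances(rng.random((5, 64)), rng.random((40, 64)))
print(dists.shape)   # (5, 40)
```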
Fig 8. Examples from the PKU dataset: the first row shows sample images with labeled pedestrians from cameras HD01, HD02-1, WMHD-1 and YTX-1, and the second row shows the corresponding pedestrian centroid trajectories.
A brief description of different gait sequences in the PKU dataset.
| Camera | Persons | Labeled subjects | View |
|---|---|---|---|
| HD01 | 30 | 3, 6, 7, 12, 13 | back |
| HD02-1 | 33 | 1, 2, 3, 4, 6, 7, 8, 9, 12, 13 | back |
| HD02-2 | 65 | 1, 2, 3, 4, 6, 7, 8, 9, 12, 13, 14, 15, 16, 17, 18 | front |
| BWBQ | 51 | 5, 7, 9, 11, 12, 13, 14, 15, 16, 17, 18 | front |
| DCM | 87 | 1, 7, 11, 12, 13, 14, 15, 16, 17, 18 | front |
| WMHD-1 | 139 | 1, 2, 3, 4, 6, 7, 8, 9, 11, 12, 13, 14, 15, 16, 17, 18 | front |
| WMHD-2 | 66 | 1, 2, 3, 4, 6, 7, 8, 9, 11, 12, 13, 14, 15, 16, 17, 18 | back |
| YTX-1 | 150 | 1, 2, 3, 4, 6, 7, 8, 9, 11, 12, 13, 14, 15, 16, 17, 18 | back |
| YTX-2 | 73 | 1, 2, 3, 4, 6, 7, 8, 9, 11, 12, 13, 14, 15, 16, 17, 18 | front |
Fig 9. Results of free-view gait recognition on the data-missing variant of CASIA-B, where different proportions of the training and gallery data were randomly discarded.
Fig 10. Results of free-view gait recognition where only the view marked on the x-axis is kept intact, while the probe view and 50% of the data from the other views are missing from the training and gallery sets of CASIA-B.
Fig 11. Results of free-view gait recognition on the PKU dataset.
View estimation results (%) for each view on CASIA-B.
| Case | Method | 0° | 18° | 36° | 54° | 72° | 90° | 108° | 126° | 144° | 162° | 180° | AVG |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| NM | WTF | 99.0 | 98.6 | 91.0 | 96.3 | 87.0 | 89.0 | 91.0 | 98.9 | 82.3 | 100.0 | 100.0 | |
| NM | GP | - | - | 84.0 | 91.2 | 85.3 | 74.0 | 86.0 | 91.2 | 93.5 | - | - | |
| NM | SVM | - | - | 94.9 | 40.5 | 85.4 | 64.3 | 24.0 | 43.6 | 98.0 | - | - | |
| BG | WTF | 100.0 | 95.0 | 86.0 | 95.9 | 91.0 | 90.0 | 90.0 | 89.0 | 82.3 | 96.9 | 100.0 | |
| BG | GP | - | - | 83.4 | 88.7 | 84.9 | 68.6 | 83.0 | 92.7 | 93.5 | - | - | |
| BG | SVM | - | - | 96.1 | 41.8 | 79.3 | 62.6 | 28.1 | 50.6 | 97.9 | - | - | |
| CT | WTF | 100.0 | 96.0 | 88.0 | 90.1 | 87.1 | 85.7 | 87.6 | 98.0 | 87.8 | 99.0 | 98.0 | |
| CT | GP | - | - | 84.0 | 91.2 | 85.3 | 74.0 | 86.0 | 91.2 | 93.5 | - | - | |
| CT | SVM | - | - | 93.7 | 50.0 | 81.0 | 61.2 | 22.5 | 41.5 | 96.6 | - | - | |
View estimation results (%) for each camera on the PKU database.
| Method | HD01 | HD02-1&2 | BWBQ | DCM | WMHD-1&2 | YTX-1&2 | AVG |
|---|---|---|---|---|---|---|---|
| WTF | 100.0 | 81.8 | 81.3 | 96.8 | 88.9 | 81.5 | |
| GP | 50.0 | 45.5 | 62.5 | 29.0 | 37.0 | 51.8 | |
| SVM | 50.0 | 63.6 | 56.3 | 12.9 | 14.8 | 66.7 | |
Fig 12. Gait recognition results using JGM and VTM on CASIA-B, where L-VTM and R-VTM are two implementations of VTM [9, 46] and JGM_n denotes the JGM with n reference views (n = 2, 4, 8).
Fig 13. mAP in the gallery-data supplementing experiment, where data from a randomly chosen view angle and its mirrored view are discarded for each subject.
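The gallery-supplementing algorithm itself is not spelled out in this entry, but a generic RankSVM can be sketched via the pairwise transform: learn a linear scoring function from pairwise feature differences. The sketch below (using scikit-learn's LinearSVC) shows only that generic formulation; how the authors use the learned ranking to recover missing gallery features is not reproduced here.

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_ranksvm(features, relevance, C=1.0):
    """Generic pairwise-transform RankSVM.

    features: (N, D) array; relevance: (N,) graded labels (higher = better).
    Learns a weight vector w such that w.(x_i - x_j) > 0 when i should rank above j.
    """
    diffs, signs = [], []
    for i in range(len(features)):
        for j in range(len(features)):
            if relevance[i] != relevance[j]:
                diffs.append(features[i] - features[j])
                signs.append(1 if relevance[i] > relevance[j] else -1)
    clf = LinearSVC(C=C).fit(np.asarray(diffs), np.asarray(signs))
    return clf.coef_.ravel()            # ranking weight vector w

# Toy usage: rank 3-D items by a noisy linear utility.
rng = np.random.default_rng(4)
X = rng.random((30, 3))
rel = (X @ np.array([2.0, -1.0, 0.5]) + 0.05 * rng.normal(size=30)).round(1)
w = train_ranksvm(X, rel)
print(np.argsort(-(X @ w))[:5])         # indices of the top-ranked items
```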
Recovery error rates when different proportions of gallery data are missing.
| Method | 10% missing | 30% missing | 50% missing |
|---|---|---|---|
| RankSVM | 0.026 | 0.029 | 0.039 |
| GKNN | 0.026 | 0.074 | 0.098 |
| KNN | 0.025 | 0.077 | 0.111 |