Abstract
This paper presents a system for identifying individuals by their gait patterns. We consider the distinguishable features that can be extracted from a user's gait and divide them into two classes: walking pattern and stepping pattern. We assume that the target environments are domestic areas, that the number of users is fewer than 10, and that all users walk barefoot, reflecting everyday life in the Korean home. Under these conditions, we have developed a system that identifies individuals by their gait patterns using our biometric sensor, UbiFloorII. We built UbiFloorII to collect walking samples and created software modules to extract each user's gait pattern. To identify users from the gait patterns extracted from walking samples over UbiFloorII, we employed a multilayer perceptron network, a feedforward artificial neural network model. The results show that both the walking pattern and the stepping pattern extracted from users' gait over UbiFloorII are distinguishable enough to identify the users, and that fusing the two classifiers at the matching score level improves recognition accuracy. Our proposed system may therefore provide an unobtrusive, automatic user identification method in ubiquitous computing environments, particularly in domestic areas.
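As a sketch of the identification stage described in the abstract, the following trains a multilayer perceptron on per-walk gait feature vectors. The data are synthetic stand-ins, and the feature dimensionality, hidden-layer size, and sample counts are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of MLP-based user identification from gait feature vectors.
# All data and hyperparameters here are illustrative, not the paper's.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_users, samples_per_user, n_features = 10, 50, 20

# Synthetic stand-in for gait feature vectors: each user gets a distinct
# mean feature vector plus Gaussian noise.
X = np.vstack([rng.normal(loc=u, scale=0.5, size=(samples_per_user, n_features))
               for u in range(n_users)])
y = np.repeat(np.arange(n_users), samples_per_user)

# Feedforward neural network classifier (one hidden layer, size assumed).
clf = MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy on the synthetic data
```

On real data, the feature vectors would come from the walking- and stepping-pattern extraction modules rather than a random generator.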
Keywords: UbiFloorII; gait recognition; stepping pattern; user identification; walking pattern
Year: 2011 PMID: 22163758 PMCID: PMC3231596 DOI: 10.3390/s110302611
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Examples of walking pattern in gait: (a) stride length, dynamic range, foot angle; (b) stance time and swing time.
Figure 2. An example of stepping pattern in gait: ground reaction force.
Figure 3. Overall structure of UbiFloorII.
Figure 4. Reflective photo interrupters (left) and electric circuit (right).
Figure 5. A wooden tile composed of 64 photo interrupters (left) and UbiFloorII (right).
Figure 6. An example of searching for footprints.
Figure 7. An example of a footprint model.
Figure 8. Walking feature extraction.
Figure 9. An array of transitional footprints.
Figure 10. An array of sampled transitional footprints.
Figure 11. Structure of the neural network for user identification.
Heights and foot sizes of the subjects.
| Subject | Height (cm) | Foot Size (mm) |
|---|---|---|
| 1 | 168 | 255 |
| 2 | 173 | 260 |
| 3 | 165 | 260 |
| 4 | 172 | 275 |
| 5 | 168 | 265 |
| 6 | 180 | 280 |
| 7 | 173 | 255 |
| 8 | 175 | 260 |
| 9 | 180 | 265 |
| 10 | 168 | 260 |
Classifications of walking feature sets.
| Set | Number of Features |
|---|---|
| 1 | 10 |
| 2 | 10 |
| 3 | 15 |
| 4 | 20 |
| 5 | 25 |
Stepping feature sets in terms of footsteps used.
| Set | Footsteps Used | Number of Footsteps |
|---|---|---|
| 1 | STEP_1, STEP_3 | 2 |
| 2 | STEP_2, STEP_4 | 2 |
| 3 | STEP_1, STEP_2 | 2 |
| 4 | STEP_1, STEP_2, STEP_3, STEP_4 | 4 |
Various sampling times for transitional footprints.
| Sampling Time (s) | Number of Sampled Footprints |
|---|---|
| 0.04 | 20 |
| 0.05 | 16 |
| 0.06 | 13 |
| 0.08 | 10 |
| 0.10 | 8 |
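The counts in the sampling-time table are consistent with a fixed transitional-footprint window of about 0.8 s (an inference from the table, not a value stated here): dividing the window by each sampling interval and truncating reproduces every row.

```python
# Assuming a 0.8 s transitional-footprint window (inferred from the table),
# compute the number of sampled footprints per sampling interval.
WINDOW = 0.8  # seconds, assumed

for dt in (0.04, 0.05, 0.06, 0.08, 0.10):
    # Tiny epsilon guards against float rounding (e.g., 0.8/0.05 landing
    # just below 16.0) before truncating to an integer count.
    n = int(WINDOW / dt + 1e-9)
    print(f"{dt:.2f} s -> {n} footprints")
```

This reproduces the table exactly: 20, 16, 13, 10, and 8 sampled footprints.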
Figure 12. Results of selecting the number of hidden nodes (left) and the epoch and goal (right) for walking pattern-based identification.
Comparison of recognition accuracy (%) of user identification by walking pattern.
| Feature Set | Accuracy (%) |
|---|---|
| 1 | 80.75 |
| 2 | 89.05 |
| 3 | 86.85 |
| 4 | 96.20 |
| 5 | 95.20 |
Figure 13. Results of selecting the number of hidden nodes (left) and the epoch and goal (right) for stepping pattern-based identification.
Comparison of recognition accuracy (%) of user identification by stepping pattern (columns 0.04–0.10: sampling time in seconds).
| Feature Set | LEFT | 0.04 | 0.05 | 0.06 | 0.08 | 0.10 |
|---|---|---|---|---|---|---|
| 1 | 58.6 | 89.0 | 89.2 | 84.4 | 82.7 | 84.5 |
| 2 | 60.7 | 89.3 | 82.4 | 80.7 | 73.4 | 80.3 |
| 3 | 60.0 | 83.9 | 84.9 | 84.8 | 82.5 | 79.0 |
| 4 | 68.1 | 92.0 | 91.9 | 91.3 | 88.5 | 86.0 |
Comparison of recognition accuracy (%) of user identification by walking pattern (mean ± standard deviation).
| Classifier | Accuracy (%) |
|---|---|
| Multilayer Perceptron | 96.64 ± 0.38 |
| Instance-based Learning | 94.08 ± 0.47 |
| Decision Tree | 88.30 ± 0.83 |
| Bayes Net | 90.86 ± 1.28 |
| Decision Table | 73.92 ± 1.36 |
| Support Vector Machine | 95.88 ± 0.33 |
Aggregate confusion matrix for the multilayer perceptron based on 10 × 10-fold cross-validation for 10 subjects’ walking pattern (rows: actual user; columns: predicted user).
| User0 | User1 | User2 | User3 | User4 | User5 | User6 | User7 | User8 | User9 | Actual |
|---|---|---|---|---|---|---|---|---|---|---|
| 445 | 10 | 0 | 23 | 12 | 0 | 0 | 0 | 0 | 10 | User0 |
| 5 | 478 | 0 | 4 | 10 | 0 | 2 | 0 | 1 | 0 | User1 |
| 0 | 0 | 485 | 0 | 0 | 0 | 0 | 15 | 0 | 0 | User2 |
| 22 | 0 | 0 | 463 | 1 | 0 | 11 | 0 | 0 | 3 | User3 |
| 0 | 0 | 0 | 0 | 500 | 0 | 0 | 0 | 0 | 0 | User4 |
| 0 | 0 | 0 | 0 | 0 | 500 | 0 | 0 | 0 | 0 | User5 |
| 0 | 0 | 0 | 12 | 0 | 0 | 488 | 0 | 0 | 0 | User6 |
| 0 | 0 | 11 | 0 | 0 | 0 | 0 | 489 | 0 | 0 | User7 |
| 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 499 | 0 | User8 |
| 10 | 0 | 0 | 5 | 0 | 0 | 0 | 0 | 0 | 485 | User9 |
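The aggregate confusion matrix can be checked against the reported mean accuracy: the diagonal (correct identifications) over the total number of trials reproduces the 96.64% figure listed for the multilayer perceptron.

```python
# Aggregate confusion matrix for the MLP walking-pattern classifier,
# copied from the table above (rows = actual user, columns = predicted user).
conf = [
    [445, 10, 0, 23, 12, 0, 0, 0, 0, 10],
    [5, 478, 0, 4, 10, 0, 2, 0, 1, 0],
    [0, 0, 485, 0, 0, 0, 0, 15, 0, 0],
    [22, 0, 0, 463, 1, 0, 11, 0, 0, 3],
    [0, 0, 0, 0, 500, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 500, 0, 0, 0, 0],
    [0, 0, 0, 12, 0, 0, 488, 0, 0, 0],
    [0, 0, 11, 0, 0, 0, 0, 489, 0, 0],
    [0, 0, 0, 0, 1, 0, 0, 0, 499, 0],
    [10, 0, 0, 5, 0, 0, 0, 0, 0, 485],
]

correct = sum(conf[i][i] for i in range(10))   # diagonal: correct decisions
total = sum(sum(row) for row in conf)          # 10 users x 500 trials
accuracy = 100 * correct / total
print(accuracy)  # 96.64
```

Each row sums to 500 trials per subject (10 runs of 10-fold cross-validation), and 4832/5000 gives the reported 96.64%.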
Comparison of recognition accuracy (%) of user identification by stepping pattern (mean ± standard deviation).
| Classifier | Accuracy (%) |
|---|---|
| Multilayer Perceptron | 92.44 ± 0.28 |
| Instance-based Learning | 84.32 ± 0.48 |
| Decision Tree | 79.16 ± 1.55 |
| Bayes Net | 91.62 ± 0.37 |
| Decision Table | 57.38 ± 1.61 |
| Support Vector Machine | 95.61 ± 0.26 |
Aggregate confusion matrix for the support vector machine based on 10 × 10-fold cross-validation for 10 subjects’ stepping pattern (rows: actual user; columns: predicted user).
| User0 | User1 | User2 | User3 | User4 | User5 | User6 | User7 | User8 | User9 | Actual |
|---|---|---|---|---|---|---|---|---|---|---|
| 497 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | User0 |
| 0 | 486 | 0 | 14 | 0 | 0 | 0 | 0 | 0 | 0 | User1 |
| 11 | 0 | 487 | 0 | 0 | 0 | 0 | 10 | 0 | 0 | User2 |
| 0 | 12 | 0 | 454 | 18 | 16 | 0 | 0 | 0 | 0 | User3 |
| 0 | 0 | 0 | 10 | 466 | 24 | 0 | 0 | 0 | 0 | User4 |
| 0 | 0 | 0 | 0 | 52 | 448 | 0 | 0 | 0 | 0 | User5 |
| 0 | 0 | 0 | 0 | 4 | 0 | 496 | 0 | 0 | 0 | User6 |
| 0 | 0 | 15 | 0 | 0 | 0 | 0 | 485 | 0 | 0 | User7 |
| 0 | 11 | 17 | 0 | 0 | 1 | 0 | 0 | 471 | 0 | User8 |
| 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 498 | User9 |
Figure 14. A flow chart of fusion at the matching score level for gait recognition.
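Fusion at the matching score level, as depicted in Figure 14, can be sketched as combining the two classifiers' per-user matching scores before making a single decision. The weighted-sum rule and the weight value below are illustrative assumptions; the paper's exact fusion rule may differ.

```python
# Sketch of matching-score-level fusion of the walking-pattern and
# stepping-pattern classifiers. The weighted-sum rule and weight w
# are illustrative assumptions, not the paper's exact method.
import numpy as np

def fuse_scores(walking_scores, stepping_scores, w=0.5):
    """Return the identified user index after weighted-sum score fusion.

    Each classifier's scores are normalized to sum to 1 so that the two
    score scales are comparable before fusing.
    """
    ws = np.asarray(walking_scores, dtype=float)
    ss = np.asarray(stepping_scores, dtype=float)
    ws = ws / ws.sum()
    ss = ss / ss.sum()
    fused = w * ws + (1 - w) * ss
    return int(np.argmax(fused))

# Hypothetical example: both classifiers assign their highest matching
# score to user 3, so the fused decision is user 3.
walking = [0.10, 0.30, 0.10, 0.35, 0.05, 0.02, 0.03, 0.02, 0.02, 0.01]
stepping = [0.05, 0.10, 0.05, 0.60, 0.05, 0.03, 0.04, 0.03, 0.03, 0.02]
print(fuse_scores(walking, stepping))  # 3
```

The point of fusing at the score level rather than the decision level is that a weak preference from one classifier can be reinforced or overridden by a strong preference from the other, which is how the combined system reaches the 99.0% accuracy reported below.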
Performance comparison of all floor-based systems.
| Study | Sensor, Substrate | Features | Classifier | Subjects | Accuracy (%) |
|---|---|---|---|---|---|
| Kennedy, 1996 | Inked barefoot prints | Pressure areas on the soles of the feet | Physical matching | N/A | N/A |
| Addlesee, 1997 | Load cells, floor | GRF discrete signal over a footstep | HMM | 15 | 91.3 |
| Orr, 2000 | Load cells, floor | GRF profile features over a footstep | KNN | 15 | 93.0 |
| Nakajima, 2000 | Load cells, mat | Direction and position of the footprints | Distance function | 10 | 85.0 |
| Yun, 2003 | Switch sensors, mat | Foot centers over 5 consecutive footsteps | MLP | 10 | 92.8 |
| Jung, 2003 | Pressure sensor, mat | 2D COP trajectories over 2 consecutive footsteps, combined classifiers | HMM | 8 | 64.0 |
| Pirttikangas, 2003 | ElectroMechanical Film, floor | Prototype vector via codebook for profile features | HMM | 3 | 76.8 |
| Pirttikangas, 2003 | ElectroMechanical Film, floor | Prototype vector via codebook for profile features | LVQ | 11 | 78.0 |
| Jung, 2004 | Pressure sensor, mat | 2D COP trajectories over 2 consecutive footsteps, combined classifiers | HMM, NN | 11 | 79.6 |
| Suutala, 2004 | ElectroMechanical Film, floor | Spatial- and frequency-domain features over a footstep | DSLVQ | 11 | 70.2 |
| Middleton, 2005 | Force Sensing Resistor, mat | Stride length, cadence, heel-to-toe ratio over 4 consecutive footsteps | N/A | 15 | 80.0 |
| Yun, 2005 | Photo interrupters, floor | Foot centers and heel-to-toe time over 5 consecutive footsteps | MLP | 10 | 96.2 |
| Suutala, 2005 | ElectroMechanical Film, floor | Spatial- and frequency-domain features over a footstep; different feature representations combined per footstep, then combined over multiple footsteps | MLP, 1 footstep | 11 | 79.2 |
| Suutala, 2008 | Switch sensors, floor | Spatial, statistical, and time-related features over a footstep; stride length and cadence from multiple consecutive footsteps | GP, 1 footstep | 9 | 64.2 |
| Suutala, 2008 | ElectroMechanical Film, floor | Spatial- and frequency-domain features over a footstep; different feature representations combined per footstep, then combined over multiple footsteps | MLP, 1 footstep | 10 | 63.3 |
| Yun, 2008 | Photo interrupters, floor | Array of sampled transitional footprints over 4 consecutive footsteps | MLP | 10 | 92.0 |
| Qian, 2008 | Force Sensing Resistor, floor | 1D pressure profile + 2D COP trajectory over a footstep; stride length, cadence, mean pressure of both footsteps | FLD | 10 | 94.2 |
| Qian, 2010 | Force Sensing Resistor, floor | 1D pressure profile + 2D COP trajectory over a footstep; stride length, mean pressure of both footsteps | FLD | 11 | 92.3 |
| Proposed method | Photo interrupters, floor | Foot centers, heel-to-toe time, and array of sampled transitional footprints over 5 consecutive footsteps; combined classifiers | MLP | 10 | 99.0 |