Daniel Konings, Fakhrul Alam, Nathaniel Faulkner, Calum de Jong.
Abstract
In recent publications, capacitive sensing floors have been shown to localize individuals in an unobtrusive manner. This paper demonstrates the feasibility of utilizing the walking characteristics extracted from a capacitive floor to recognize a subject's identity and gender. Several neural-network-based machine learning techniques are developed for recognizing the gender and identity of a target. These algorithms were trained and validated on a dataset constructed from the information captured from 23 subjects walking, alone, on the sensing floor. A deep neural network comprising a Bi-directional Long Short-Term Memory (BLSTM) provided the best identity performance, classifying individuals with an accuracy of 98.12% on the test data. A Convolutional Neural Network (CNN), on the other hand, was the most accurate for gender recognition, attaining an accuracy of 93.3%. The neural-network-based algorithms are benchmarked against a Support Vector Machine (SVM), a classifier used in many reported works on floor-based recognition tasks. The majority of the neural networks outperform the SVM across all accuracy metrics.
Keywords: biometrics; capacitive floor; gender classification; human sensing; machine learning; neural network
Year: 2022 PMID: 36236306 PMCID: PMC9571660 DOI: 10.3390/s22197206
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Floor-Based Privacy-Preserving Human Classification Approaches.
| Source | Measurement Method | Classification Goal | Accuracy | Algorithm | Participants |
|---|---|---|---|---|---|
| Proposed | Capacitive Flooring | Identity/Gender | 0.98/0.93 | BLSTM/CNN | 23 |
| Bales et al. | Underfloor Accelerometer | Gender | 0.88 | SVM | 15 |
| Anchal et al. | Geophone | Gender | 0.956 | SVM | 8 |
| Qian et al. | Pressure Sensors | Identity | 0.92 | Fisher LD | 11 |
| Mukhopadhyay et al. | Geophone | Identity | 0.92–0.98 | SVM | 8 |
| Clemente et al. | Seismometer | Identity | 0.97 | SVM | 6 |
| Miyoshi et al. | Microphone | Identity | 0.928 | GMM | 12 |
| Shi et al. | Capacitive (Triboelectric Sensor) | Identity | 0.96 | CNN | 10 |
| Shi et al. | Capacitive (Triboelectric Sensor) | Identity | 0.8567 | CNN | 20 |
| Li et al. | Capacitive (Triboelectric Sensor) | Identity | 0.976 | BLSTM | 8 |
| Pan et al. | Geophone | Identity | 0.9 | SVM | 10 |
Figure 1. Loading-mode capacitor formed by a subject's foot.
Figure 2. The structure of the capacitive sensing floor. Any non-conductive flooring can be laid on top of the sensing bed.
Figure 3. System architecture showing the PC–board–panel connections.
Figure 4. Eight sensing floor panels, with carpet installed above, form the testbed the subjects walked on.
Figure 5. Self-reported participant characteristics.
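As a subject walks across the panels, the sensing floor produces a stream of capacitance frames. As a rough illustration of how such a stream could be prepared for the sequence models evaluated below, the following sketch slices a frame stream into fixed-length windows. The frame dimensions, window length, and stride here are hypothetical; the paper's actual preprocessing is not reproduced in this record.

```python
import numpy as np

def make_windows(frames, seq_len, stride):
    """Slice a (T, H, W) stream of sensor frames into overlapping
    (N, seq_len, H, W) windows for sequence models such as a BLSTM."""
    T = frames.shape[0]
    starts = range(0, T - seq_len + 1, stride)
    return np.stack([frames[s:s + seq_len] for s in starts])

# Hypothetical example: 100 frames from a 4 x 8 grid of floor electrodes
frames = np.random.rand(100, 4, 8)
windows = make_windows(frames, seq_len=20, stride=10)
print(windows.shape)  # (9, 20, 4, 8)
```

A CNN would instead treat each window (or an aggregate of it) as a 2-D image, which is consistent with the 5 × 5 filter sizes reported in the hyperparameter tables below.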
Neural Network Hyperparameter Values Used in the Final Tuned Identity Models.
| Algorithm | Hyperparameter Range | Final Hyperparameter Values |
|---|---|---|
| CNN | Epsilon: [1 × 10⁻⁸, 1] | Epsilon: 0.0388 |
| | Filter size: [2, 7] | Filter size: 5 (5 × 5) |
| | Pooling: [1, 5] | Pooling: 1 (no pooling) |
| | Number of filters: 2^[3, 9] | Number of filters: 256 |
| | Epsilon: [1 × 10⁻⁸, 1] | Epsilon: 0.081 |
| | Neurons: 2^[4, 10] | Neurons: 1024 |
| | Dropout: [0, 0.4] | Dropout: 0.19 |
| | Epsilon: [1 × 10⁻⁸, 1] | Epsilon: 0.0013 |
| | Dropout (for hidden layer 1): [0, 0.4] | Dropout (for hidden layer 1): 0.369 |
| | Dropout (for hidden layer 2): [0, 0.4] | Dropout (for hidden layer 2): 0.116 |
| | Epsilon: [1 × 10⁻⁸, 1] | Epsilon: 0.00192 |
| | Dropout (for hidden layer 1): [0, 0.4] | Dropout (for hidden layer 1): 0.354 |
| | Dropout (for hidden layer 2): [0, 0.4] | Dropout (for hidden layer 2): 0.19 |
| | Epsilon: [1 × 10⁻⁸, 1] | Epsilon: 0.00054 |
| | Dropout (for hidden layer 1): [0, 0.4] | Dropout (for hidden layer 1): 0.34 |
| | Dropout (for hidden layer 2): [0, 0.4] | Dropout (for hidden layer 2): 0.06 |
Neural Network Hyperparameter Values Used in the Final Tuned Gender Models.
| Algorithm | Hyperparameter Range | Final Hyperparameter Values |
|---|---|---|
| CNN | Epsilon: [1 × 10⁻⁸, 1] | Epsilon: 0.0058 |
| | Filter size: [2, 7] | Filter size: 5 (5 × 5) |
| | Pooling: [1, 5] | Pooling: 1 (no pooling) |
| | Number of filters: 2^[3, 9] | Number of filters: 128 |
| | Fully connected layer neurons: 2^[5, 10] | |
| | Epsilon: [1 × 10⁻⁸, 1] | Epsilon: 0.008 |
| | Neurons: 2^[4, 10] | Neurons: 512 |
| | Dropout: [0, 0.4] | Dropout: 0.047 |
| | Epsilon: [1 × 10⁻⁸, 1] | Epsilon: 0.03118 |
| | Dropout (for hidden layer 1): [0, 0.4] | Dropout (for hidden layer 1): 0.27 |
| | Dropout (for hidden layer 2): [0, 0.4] | Dropout (for hidden layer 2): 0.05 |
| | Epsilon: [1 × 10⁻⁸, 1] | Epsilon: 0.00192 |
| | Dropout (for hidden layer 1): [0, 0.4] | Dropout (for hidden layer 1): 0.38 |
| | Dropout (for hidden layer 2): [0, 0.4] | Dropout (for hidden layer 2): 0.18 |
| | Epsilon: [1 × 10⁻⁸, 1] | Epsilon: 0.00391 |
| | Dropout (for hidden layer 1): [0, 0.4] | Dropout (for hidden layer 1): 0.36 |
| | Dropout (for hidden layer 2): [0, 0.4] | Dropout (for hidden layer 2): 0.12 |
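The tables above give the search ranges but not the tuning procedure itself. Since epsilon spans eight orders of magnitude while dropout covers a narrow linear range, a tuner would typically sample the former log-uniformly and the latter uniformly. The following is a hypothetical random-search sketch over the tabulated ranges, not the authors' actual tuner:

```python
import random

def sample_config(rng):
    """Draw one candidate configuration from the tabulated search
    ranges: epsilon log-uniformly over [1e-8, 1], dropout rates
    uniformly over [0, 0.4], and the filter count on a
    power-of-two grid 2^[3, 9]."""
    return {
        "epsilon": 10 ** rng.uniform(-8, 0),
        "dropout_h1": rng.uniform(0.0, 0.4),
        "dropout_h2": rng.uniform(0.0, 0.4),
        "num_filters": 2 ** rng.randint(3, 9),  # randint is inclusive
    }

rng = random.Random(42)
configs = [sample_config(rng) for _ in range(50)]
```

Each candidate would then be trained and scored on validation data, with the best-scoring configuration kept as the "final" values reported in the tables.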
Test Performance of the Final Tuned Identity Models.
| Algorithm | Precision | Recall | Accuracy | F1 Score | Total Learnable Parameters |
|---|---|---|---|---|---|
| | 0.92065 | 0.942478 | 0.936 | 0.931436 | 57,378,071 |
| | 0.89125 | 0.917696 | 0.913 | 0.90428 | 1,279,399 |
| | 0.92165 | 0.944348 | 0.939 | 0.932861 | 2,253,591 |
| BLSTM | 0.9776 | 0.978696 | 0.9812 | 0.978148 | 11,120,023 |
| | 0.88815 | 0.913348 | 0.906 | 0.900573 | 3,691,927 |
| SVM | 0.82615 | 0.863478 | 0.854 | 0.844402 | 129,930 × 200 * |
* Total number of support vectors used by the 23 binary learners trained for the SVM.
Test Performance of the Final Tuned Gender Models.
| Algorithm | Precision | Recall | Accuracy | F1 Score | Total Learnable Parameters |
|---|---|---|---|---|---|
| CNN | 0.9335 | 0.9315 | 0.933 | 0.932499 | 6,967,938 |
| | 0.8885 | 0.8845 | 0.887 | 0.886495 | 892,306 |
| | 0.8945 | 0.889 | 0.890 | 0.891742 | 5,152,962 |
| | 0.848 | 0.8495 | 0.839 | 0.848749 | 620,674 |
| | 0.89 | 0.8845 | 0.884 | 0.887241 | 3,813,202 |
| SVM | 0.855 | 0.85 | 0.85 | 0.852493 | 15,053 × 200 * |
* Total number of support vectors used by the single binary learner trained for the SVM.
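The tabulated F1 scores are consistent with the harmonic mean of the macro-averaged precision and recall in the same row (for example, 2·0.92065·0.942478 / (0.92065 + 0.942478) ≈ 0.93144). A minimal sketch of these metrics, using a hypothetical two-class confusion matrix rather than the paper's actual results:

```python
import numpy as np

def macro_metrics(cm):
    """Macro precision/recall, their harmonic-mean F1, and accuracy
    from a confusion matrix cm, where cm[i, j] counts samples of
    true class i predicted as class j."""
    tp = np.diag(cm).astype(float)
    precision = tp / cm.sum(axis=0)   # column sums: predicted counts
    recall = tp / cm.sum(axis=1)      # row sums: true counts
    p, r = precision.mean(), recall.mean()
    f1 = 2 * p * r / (p + r)          # harmonic mean of macro P and R
    accuracy = tp.sum() / cm.sum()
    return p, r, f1, accuracy

# Toy two-class (gender) confusion matrix over 100 test samples
cm = np.array([[45, 5],
               [8, 42]])
p, r, f1, acc = macro_metrics(cm)
```

For the 23-class identity task the same computation applies to a 23 × 23 confusion matrix, such as the one visualized in Figure 10.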
Figure 6. Best Performing Identity Classification Structure: BLSTM.
Figure 7. Training Plot for Best Performing Identity Network.
Figure 8. Best Performing Gender Classification Structure: CNN.
Figure 9. Training Plots for Best Performing Gender Network.
Figure 10. Identity Classification using Final BLSTM Model.
Figure 11. Gender Classification using Final CNN Model.