| Literature DB >> 30717199 |
Baoding Zhou, Jun Yang, Qingquan Li.
Abstract
In indoor environments, pedestrian activities carry semantic information and can serve as landmarks for indoor localization. In this paper, we propose a pedestrian activity recognition method based on a convolutional neural network (CNN). A new CNN has been designed to learn suitable features automatically. Experiments show that the proposed method achieves approximately 98% accuracy within about 2 s in identifying nine types of activities: still, walking, going upstairs, taking an elevator up, taking an escalator up, taking an elevator down, taking an escalator down, going downstairs, and turning. Moreover, we have built a pedestrian activity database containing more than 6 GB of accelerometer, magnetometer, gyroscope, and barometer data collected with various types of smartphones. We will make it public to contribute to academic research.
Keywords: activity recognition; deep learning; indoor localization; smartphone
Year: 2019 PMID: 30717199 PMCID: PMC6387421 DOI: 10.3390/s19030621
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
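The method classifies short windows of smartphone sensor data (the paper reports recognition within roughly 2 s). As a hedged illustration of the segmentation step, the sketch below splits a multi-channel sensor stream into overlapping fixed-length windows; the 50 Hz sampling rate, step size, and array layout are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def sliding_windows(samples, window_len=100, step=50):
    """Split a (num_samples, num_channels) sensor stream into
    overlapping fixed-length windows suitable as CNN input.

    window_len=100 corresponds to a 2 s window at an assumed 50 Hz rate.
    """
    windows = []
    for start in range(0, len(samples) - window_len + 1, step):
        windows.append(samples[start:start + window_len])
    if not windows:
        return np.empty((0, window_len, samples.shape[1]))
    return np.stack(windows)

# Example: 10 channels (3-axis accelerometer, gyroscope, magnetometer,
# plus a barometer channel), 10 s of data at 50 Hz.
stream = np.zeros((500, 10))
batch = sliding_windows(stream)  # shape (9, 100, 10)
```

Each window would then be fed to the classifier independently, so a prediction is available every step (here, every second).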
Types of activities to classify.
| No. | Activity | Definition |
|---|---|---|
| A1 | down elevator | Taking an elevator downward |
| A2 | down escalator | Taking an escalator downward |
| A3 | downstairs | Going down stairs |
| A4 | up elevator | Taking an elevator upward |
| A5 | up escalator | Taking an escalator upward |
| A6 | upstairs | Going up stairs |
| A7 | turning | Turning a corner |
| A8 | walking | The user walks naturally |
| A9 | still | The user remains static |
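For training, the nine activity codes above must be mapped to integer class labels; a minimal sketch (the A1-to-A9 ordering follows the table, but the encoding itself is an assumption):

```python
# Activity names in table order A1..A9.
ACTIVITIES = ["down elevator", "down escalator", "downstairs",
              "up elevator", "up escalator", "upstairs",
              "turning", "walking", "still"]

# Map each activity name to an integer class label 0..8.
LABEL = {name: idx for idx, name in enumerate(ACTIVITIES)}

def one_hot(name):
    """Return a 9-element one-hot vector for a named activity."""
    vec = [0] * len(ACTIVITIES)
    vec[LABEL[name]] = 1
    return vec
```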
Software and hardware configuration used for training.
| Software & Hardware | Configuration |
|---|---|
| CPU | Intel Xeon(R) CPU E5-2690 v4 @ 2.60 GHz × 28 |
| Memory | 64 GB |
| Graphics card | GeForce GTX 1080 Ti × 2 |
| CUDA | CUDA 8.0 |
| cuDNN | cuDNN 6.0 |
| GCC | GCC 5.4.0 |
| Python | Python 2.7 |
| TensorFlow | TensorFlow 1.4.0 |
Figure 1. The architecture of the proposed method.
The specific information of the participants.
| No. | Height (cm) | Gender | Age |
|---|---|---|---|
| 1 | 163 | female | 25 |
| 2 | 168 | male | 24 |
| 3 | 173 | male | 26 |
| 4 | 180 | male | 22 |
| 5 | 180 | male | 25 |
| 6 | 168 | male | 24 |
| 7 | 165 | female | 23 |
| 8 | 186 | male | 22 |
| 9 | 162 | female | 24 |
| 10 | 172 | male | 31 |
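Summary statistics for the participant table can be recomputed directly from its rows; a small sketch:

```python
# Values transcribed from the participant table above.
heights = [163, 168, 173, 180, 180, 168, 165, 186, 162, 172]
ages = [25, 24, 26, 22, 25, 24, 23, 22, 24, 31]
genders = ["female", "male", "male", "male", "male",
           "male", "female", "male", "female", "male"]

mean_height = sum(heights) / len(heights)   # 171.7 cm
mean_age = sum(ages) / len(ages)            # 24.6 years
num_female = genders.count("female")        # 3 of 10 participants
```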
List of hyperparameters for the proposed CNN. The value in bold is the best setting of each hyperparameter.
| Hyperparameter | Values |
|---|---|
| Number of convolutional layers | 2, 3, 4, … |
| Filter size | 2, 5, … |
| Number of feature maps | 60, 80, … |
| Pooling size | 3, 4, … |
| Learning rate | 0.0001, 0.0005, … |
| Batch size | 16, 32, … |
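A grid search over such hyperparameter lists can be enumerated with `itertools.product`. The sketch below uses only the values visible in the table above (the extracted value lists appear truncated, so these grids are incomplete and serve only as an illustration):

```python
from itertools import product

# Candidate values per hyperparameter (partial, from the table above).
grid = {
    "num_conv_layers": [2, 3, 4],
    "filter_size": [2, 5],
    "num_feature_maps": [60, 80],
    "pooling_size": [3, 4],
    "learning_rate": [0.0001, 0.0005],
    "batch_size": [16, 32],
}

# Every combination of the listed values: 3 * 2 * 2 * 2 * 2 * 2 = 96 configs.
configs = [dict(zip(grid, values)) for values in product(*grid.values())]
```

Each configuration dict would be used to build and train one model, keeping the setting with the best validation accuracy.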
Figure 2. The performance with different numbers of convolutional layers.
Figure 3. The performance with different filter sizes.
Figure 4. The performance with different numbers of feature maps.
Figure 5. The performance with different pooling sizes.
Figure 6. The performance with different learning rates.
Figure 7. The performance with different batch sizes.
Figure 8. The performance with different window sizes.
Figure 9. The performance for each type of activity with the best configuration.
The confusion matrix of the proposed method.
| Actual \ Predicted | A1 | A2 | A3 | A4 | A5 | A6 | A7 | A8 | A9 |
|---|---|---|---|---|---|---|---|---|---|
| A1 | 3294 | 18 | 2 | 58 | 3 | 4 | 0 | 0 | 2 |
| A2 | 15 | 3352 | 0 | 1 | 66 | 0 | 0 | 0 | 0 |
| A3 | 4 | 1 | 3435 | 5 | 0 | 9 | 5 | 6 | 1 |
| A4 | 7 | 3 | 2 | 3398 | 16 | 1 | 0 | 1 | 2 |
| A5 | 2 | 65 | 0 | 11 | 3345 | 1 | 0 | 0 | 1 |
| A6 | 1 | 3 | 9 | 6 | 4 | 3281 | 12 | 9 | 5 |
| A7 | 0 | 0 | 5 | 0 | 0 | 3 | 3372 | 5 | 0 |
| A8 | 1 | 1 | 1 | 1 | 1 | 9 | 3 | 3353 | 1 |
| A9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3384 |
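The overall accuracy implied by the confusion matrix can be recomputed directly from its entries; a minimal sketch, taking the rows as actual classes A1 to A9 in order:

```python
# Confusion matrix rows (actual A1..A9) transcribed from the table above.
M = [
    [3294, 18, 2, 58, 3, 4, 0, 0, 2],
    [15, 3352, 0, 1, 66, 0, 0, 0, 0],
    [4, 1, 3435, 5, 0, 9, 5, 6, 1],
    [7, 3, 2, 3398, 16, 1, 0, 1, 2],
    [2, 65, 0, 11, 3345, 1, 0, 0, 1],
    [1, 3, 9, 6, 4, 3281, 12, 9, 5],
    [0, 0, 5, 0, 0, 3, 3372, 5, 0],
    [1, 1, 1, 1, 1, 9, 3, 3353, 1],
    [0, 0, 0, 0, 0, 0, 0, 0, 3384],
]

total = sum(sum(row) for row in M)
correct = sum(M[i][i] for i in range(9))
accuracy = correct / total  # roughly 0.987, consistent with the reported ~98%

def recall(i):
    """Fraction of actual class-i samples predicted as class i."""
    return M[i][i] / sum(M[i])
```

Per-class precision follows the same pattern over column sums, and F-measure combines the two; the "still" class (A9) is classified perfectly in this matrix.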
Figure 10. Performance comparison with other methods for each type of activity.
Figure 11. Comparison of the average F-measure over nine activities with other methods.