Jinhee Park, Rios Jesus Javier, Taesup Moon, Youngwook Kim.
Abstract
Accurate classification of human aquatic activities using radar has a variety of potential applications, such as rescue operations and border patrols. Nevertheless, the classification of activities on water using radar has not been extensively studied, unlike the case on dry ground, due to its unique challenges: not only is the radar cross section of a human on water small, but the micro-Doppler signatures are also much noisier due to water drops and waves. In this paper, we first investigate, through a simulation study, whether discriminative signatures can be obtained for activities on water. We then show how high classification accuracy can be achieved by applying deep convolutional neural networks (DCNN) directly to the spectrograms of real measurement data. From five-fold cross-validation on our dataset, which consists of five aquatic activities, we report that the conventional feature-based scheme achieves an accuracy of only 45.1%. In contrast, a DCNN trained using only the collected data attains 66.7%, and a transfer-learned DCNN, which takes a DCNN pre-trained on an RGB image dataset and fine-tunes its parameters using the collected data, achieves a much higher accuracy of 80.3%, a significant performance boost.
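The micro-Doppler signatures classified here are time-frequency spectrograms of the radar return. As a minimal illustration (not the paper's measurement pipeline; the signal model, sampling rate, and window parameters below are invented for this sketch), a short-time Fourier transform of a simulated phase-modulated return can be computed in NumPy:

```python
import numpy as np

# Hypothetical micro-Doppler return: a single scatterer (e.g., an arm)
# oscillating sinusoidally, phase-modulating the radar carrier.
fs = 1000.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)    # 2 s of slow-time samples
f_d, f_m = 80.0, 1.5             # peak Doppler shift and motion rate (assumed)
x = np.exp(1j * (f_d / f_m) * np.sin(2 * np.pi * f_m * t))

# Short-time Fourier transform: slide a Hann window, FFT each segment.
nperseg, step = 128, 32
win = np.hanning(nperseg)
frames = [x[i:i + nperseg] * win for i in range(0, len(x) - nperseg + 1, step)]
S = np.abs(np.fft.fftshift(np.fft.fft(frames, axis=1), axes=1)) ** 2
spectrogram_db = 10 * np.log10(S + 1e-12)  # log-scale time-frequency map
print(spectrogram_db.shape)                # (time frames, frequency bins)
```

Each row of `spectrogram_db` is one time frame; the instantaneous Doppler frequency traces a sinusoid of peak 80 Hz, the kind of periodic limb signature the simulation study looks for.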
Keywords: aquatic activity classification; convolutional neural networks; micro-Doppler signatures; radar; transfer learning
Year: 2016 PMID: 27886151 PMCID: PMC5190971 DOI: 10.3390/s16121990
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Modeling of arm motions of (a) freestyle and (b) backstroke.
Figure 2. Simulated spectrograms of (a) freestyle and (b) backstroke.
Figure 3. Example pictures and spectrograms of (a) freestyle; (b) backstroke; (c) breaststroke; (d) a swimming person pulling a floating boat; and (e) rowing.
Figure 4. The architectures of the DCNNs learned from scratch: (a) DCNN-Scratch-I and (b) DCNN-Scratch-II.
Figure 5. The architectures of the DCNNs used for transfer learning: (a) AlexNet and (b) VGG16.
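The transfer-learning idea behind the AlexNet/VGG16 variants is to reuse a convolutional feature extractor learned on RGB images and retrain only the classifier head on radar spectrograms. Everything below is a toy NumPy stand-in (a fixed random projection instead of real VGG16 features, random arrays instead of spectrograms); it only demonstrates the mechanism: SGD updates the new 5-way head while the "pre-trained" features stay frozen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pre-trained feature extractor: a fixed random
# projection of flattened "spectrogram" pixels followed by a ReLU.
W_frozen = rng.normal(size=(4096, 256)) / 64.0

def features(x):                           # x: (batch, 4096)
    return np.maximum(x @ W_frozen, 0.0)   # never updated during training

X = rng.normal(size=(64, 4096))            # toy spectrogram batch
y = rng.integers(0, 5, size=64)            # toy labels for 5 activities
F = features(X)                            # computed once: extractor frozen
W_head = np.zeros((256, 5))                # new 5-way classifier head

def loss_and_grad(W):
    logits = F @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    loss = -np.log(p[np.arange(64), y]).mean()   # cross-entropy
    p[np.arange(64), y] -= 1.0                   # softmax gradient
    return loss, F.T @ p / 64

loss0, _ = loss_and_grad(W_head)
for _ in range(200):                       # SGD on the head only
    _, g = loss_and_grad(W_head)
    W_head -= 0.01 * g
loss1, _ = loss_and_grad(W_head)
print(loss0, loss1)                        # starts at log(5); then decreases
```

In the paper's actual setup the pre-trained convolutional weights are fine-tuned as well, not kept strictly frozen; this sketch isolates only the "reuse features, retrain head" core of the technique.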
Five-fold cross-validation results for the compared schemes.

| Scheme | Fold 1 | Fold 2 | Fold 3 | Fold 4 | Fold 5 | Average |
|---|---|---|---|---|---|---|
| Feature-based | 27.2% | 36.0% | 56.8% | 70.4% | 35.2% | 45.1% |
| DCNN-Scratch-I | 64.3% | 49.8% | 92.5% | 49.8% | 53.2% | 61.9% |
| DCNN-Scratch-II | 68.0% | 51.2% | 98.4% | 63.2% | 52.8% | 66.7% |
| DCNN-TL-AlexNet | 60.0% | 79.2% | 93.6% | 69.6% | 70.4% | 74.6% |
| DCNN-TL-VGG16 | 70.4% | 72.0% | 99.2% | 82.4% | 77.6% | 80.3% |
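As a sanity check, the per-scheme accuracies quoted in the abstract (45.1%, 66.7%, and 80.3%) are the arithmetic means of the fold accuracies in three of the rows above; the row labels used below are inferred from the abstract and are an assumption, not part of the original table.

```python
# Fold accuracies copied from the table; labels inferred from the abstract.
folds = {
    "Feature-based":          [27.2, 36.0, 56.8, 70.4, 35.2],
    "DCNN (from scratch)":    [68.0, 51.2, 98.4, 63.2, 52.8],
    "DCNN (transfer, VGG16)": [70.4, 72.0, 99.2, 82.4, 77.6],
}
for name, accs in folds.items():
    print(f"{name}: {sum(accs) / len(accs):.1f}%")  # 45.1, 66.7, 80.3
```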
Figure 6. Average learning curves of DCNN-Scratch-I and DCNN-TL-VGG16. The horizontal axis denotes the number of mini-batch SGD update iterations.
Figure 7. Visualization of feature maps: (a) raw spectrogram for "freestyle"; (b) visualization of five feature maps of DCNN-TL-VGG16 after the first convolution layer.