Danny Buchman1, Michail Drozdov2, Tomas Krilavičius1, Rytis Maskeliūnas1, Robertas Damaševičius1.
Abstract
Pedestrian occurrences in images and videos must be accurately recognized in a number of applications that may improve the quality of human life. Radar can be used to identify pedestrians. When distinct portions of an object move in front of a radar, micro-Doppler signals are produced that may be utilized to identify the object. Using a deep-learning network and time-frequency analysis, we offer a method for classifying pedestrians and animals based on their micro-Doppler radar signature features. Based on these signatures, we employed a convolutional neural network (CNN) to recognize pedestrians and animals. The proposed approach was evaluated on the MAFAT Radar Challenge dataset. Encouraging results were obtained, with an AUC (Area Under Curve) value of 0.95 on the public test set and over 0.85 on the final (private) test set. The proposed DNN architecture, in contrast to more common shallow CNN architectures, is one of the first attempts to use such an approach in the domain of radar data. The use of the synthetic radar data, which greatly improved the final result, is the other novel aspect of our work.Entities:
Keywords: animal recognition; deep learning; Doppler radar; micro-Doppler signature; pedestrian recognition
Year: 2022 PMID: 35591146 PMCID: PMC9105660 DOI: 10.3390/s22093456
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Figure 1. Explanation of MAFAT data.
Figure 2. Examples of spectrograms.
Figure 3. Basic process structure of the radar data classification model training.
Figure 4. The architecture of the main classification model.
Figure 5. The convolution residual block used in the main classification model.
Figure 6. The convolution identity block used in the main classification model.
Figure 7. The architecture of the secondary classification model.
Figure 8. Test run of the learning rate schedule with five cycles, produced by a software tool available from [73].
Figure 9. Training performance: (a) loss; (b) ROC AUC; (c) accuracy.
Experimental results with different K-folds.

| K-Fold Experiments Results | | | |
|---|---|---|---|
| Metric | | | |
| Accuracy (%) | 94.035 (±0.5777) | 89.708 (±0.5129) | 91.855 (±3.0449) |
| ROC AUC (%) | 98.347 (±0.2502) | 97.181 (±0.1869) | 98.510 (±0.6938) |
| Loss | 0.1776 | 0.3074 | 0.2828 |
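The "mean (±std)" figures in the table aggregate per-fold scores from K-fold cross-validation. A minimal sketch of that aggregation step (the fold values below are hypothetical illustrations, not the paper's actual fold results; whether the authors used the sample or population standard deviation is not stated, so the sketch assumes the sample form):

```python
import numpy as np

# Hypothetical per-fold ROC AUC values (%) from a 5-fold run --
# illustrative numbers only, not taken from the paper.
fold_auc = np.array([98.1, 98.5, 98.4, 98.3, 98.4])

# Report mean and sample standard deviation (ddof=1), matching the
# "mean (±std)" format used in the table above.
mean, std = fold_auc.mean(), fold_auc.std(ddof=1)
print(f"ROC AUC: {mean:.3f} (±{std:.4f})")
```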