Luca Ciampi, Nicola Messina, Fabrizio Falchi, Claudio Gennaro, Giuseppe Amato.
Abstract
Pedestrian detection through Computer Vision is a building block for a multitude of applications. Recently, there has been an increasing interest in convolutional neural network-based architectures for performing this task. One of the critical goals of these supervised networks is to generalize the knowledge learned during training to new scenarios with different characteristics. A suitably labeled dataset is essential to achieve this purpose. The main problem is that manually annotating a dataset usually requires a lot of human effort, and it is costly. To this end, we introduce ViPeD (Virtual Pedestrian Dataset), a new synthetically generated set of images collected with the highly photo-realistic graphical engine of the video game GTA V (Grand Theft Auto V), where annotations are automatically acquired. However, when training solely on the synthetic dataset, the model experiences a Synthetic2Real domain shift, leading to a performance drop when applied to real-world images. To mitigate this gap, we propose two different domain adaptation techniques suitable for the pedestrian detection task, but possibly applicable to general object detection. Experiments show that the network trained with ViPeD can generalize over unseen real-world scenarios better than the detector trained over real-world data, exploiting the variety of our synthetic dataset. Furthermore, we demonstrate that with our domain adaptation techniques, we can reduce the Synthetic2Real domain shift, making the two domains closer and obtaining a performance improvement when testing the network on real-world images.
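The record does not specify the detector architecture used in the paper. As a hedged illustration of the kind of CNN-based pedestrian detector the abstract refers to, the sketch below instantiates a single-class (background + pedestrian) detector with torchvision's Faster R-CNN; the backbone, helper name, and class count are assumptions, not details taken from the paper.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_pedestrian_detector(num_classes: int = 2):
    """Illustrative Faster R-CNN with a two-class head: background + pedestrian."""
    # Pre-trained backbone and detection head from torchvision (>= 0.13 for `weights=`).
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    # Replace the box classifier with a pedestrian-only predictor.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model
```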
Keywords: convolutional neural networks; deep learning; domain adaptation; pedestrian detection; synthetic datasets
Year: 2020 PMID: 32937977 PMCID: PMC7570533 DOI: 10.3390/s20185250
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. (a) Pedestrians in the JTA (Joint Track Auto) dataset with their skeletons. (b) Examples of annotations in the ViPeD (Virtual Pedestrian Dataset) dataset; original bounding boxes are in green, while the sanitized ones are in blue.
Figure 2. Histogram of distances between pedestrians and cameras.
Figure 3. Examples of images of the ViPeD dataset together with the sanitized bounding boxes.
Figure 4. Overview of the first domain adaptation technique. In the first step, we train the detector using ViPeD, our synthetic collection of images. In the second step, we fine-tune the network using real-world images.
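A minimal sketch of the two-step procedure outlined in Figure 4, assuming a PyTorch-style training loop over a torchvision detector (see the Faster R-CNN sketch above). `build_pedestrian_detector`, `viped_loader`, `real_loader`, and all hyperparameters are hypothetical placeholders, not values reported in the paper.

```python
import torch

def train(model, loader, epochs, lr, device="cuda"):
    model.to(device).train()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9, weight_decay=5e-4)
    for _ in range(epochs):
        for images, targets in loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            loss_dict = model(images, targets)  # torchvision detectors return a loss dict in train mode
            loss = sum(loss_dict.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model

# Step 1: train on the synthetic ViPeD images (hypothetical `viped_loader`).
model = train(build_pedestrian_detector(), viped_loader, epochs=10, lr=0.005)
# Step 2: fine-tune on the target real-world dataset (hypothetical `real_loader`) with a lower learning rate.
model = train(model, real_loader, epochs=5, lr=0.0005)
```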
Figure 5. Overview of the second domain adaptation technique. We mitigate the Synthetic2Real domain shift in a single-step training procedure, employing mixed batches containing both synthetic and real images at the same time.
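One way to realize the mixed-batch idea of Figure 5 is to draw from two data loaders and concatenate their samples before each forward pass. The sketch below assumes PyTorch; `viped_dataset`, `real_dataset`, `model`, `optimizer`, and the 50/50 per-batch split are illustrative assumptions rather than the paper's actual configuration.

```python
from itertools import cycle
from torch.utils.data import DataLoader

def detection_collate(batch):
    # Keep variable-sized images and their targets as tuples instead of stacking.
    return tuple(zip(*batch))

# Hypothetical detection datasets returning (image, target) pairs.
synthetic_loader = DataLoader(viped_dataset, batch_size=4, shuffle=True, collate_fn=detection_collate)
real_loader = DataLoader(real_dataset, batch_size=4, shuffle=True, collate_fn=detection_collate)

for (syn_imgs, syn_tgts), (real_imgs, real_tgts) in zip(synthetic_loader, cycle(real_loader)):
    images = list(syn_imgs) + list(real_imgs)   # one batch containing both domains
    targets = list(syn_tgts) + list(real_tgts)
    loss_dict = model(images, targets)          # single forward/backward pass over the mixed batch
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```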
Evaluation of the generalization capabilities. The first section of the table reports results obtained by training the detector with real-world data, while the second refers to the model trained over synthetic images. ViPeD + Real refers to the mixed-batch experiments with ViPeD and COCOPersons. Columns indicate the test dataset. Results are evaluated using the COCO mAP. We report in bold the best results.
| Training Dataset | MOT17Det | MOT19Det | CityPersons |
|---|---|---|---|
| COCO | 0.636 | 0.466 | 0.546 |
| MOT17Det | - | 0.605 | |
| MOT19Det | 0.618 | - | 0.419 |
| CityPersons | 0.710 | 0.488 | - |
| ViPeD | 0.721 | | 0.516 |
| ViPeD + Real | | 0.582 | 0.546 |
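The COCO mAP used in the table above can be computed with the standard pycocotools evaluator, as in the hedged sketch below; the JSON file names are placeholders, and the detections file is assumed to follow the usual COCO results format (`image_id`, `category_id`, `bbox`, `score`).

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Ground-truth annotations and detector outputs in COCO JSON format (placeholder file names).
coco_gt = COCO("person_ground_truth.json")
coco_dt = coco_gt.loadRes("person_detections.json")

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # first printed value is AP averaged over IoU 0.50:0.95, i.e., the COCO mAP
```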
Evaluation of the two Domain Adaptation (DA) techniques on the MOT17Det dataset. FT-DA (Fine-Tuning DA) is the first proposed solution, while MB-DA (Mixed-Batch DA) is the second one. Results are evaluated using the MOT Average Precision (AP). We report in bold the best results.
| Method | MOT AP |
|---|---|
| YTLAB | 0.89 |
| KDNT | 0.89 |
| ViPeD FT-DA (our) | |
| ViPeD MB-DA (our) | 0.87 |
| ZIZOM | 0.81 |
| SDP | 0.81 |
| FRCNN | 0.72 |
Evaluation of the two DA techniques on the MOT19Det dataset. FT-DA (Fine-Tuning DA) is the first proposed solution, while MB-DA (Mixed-Batch DA) is the second one. Results are evaluated using the MOT Average Precision (AP). We report in bold the best results.
| Method | MOT AP |
|---|---|
| SRK_ODESA | |
| CVPR19_det | 0.80 |
| Aaron | 0.79 |
| PSdetect19 | 0.74 |
| ViPeD FT-DA (our) | 0.80 |
| ViPeD MB-DA (our) | 0.80 |