Saad Minhas, Zeba Khanam, Shoaib Ehsan, Klaus McDonald-Maier, Aura Hernández-Sabaté.
Abstract
Weather prediction from real-world images is a complex task when approached as a classification problem for neural networks. Moreover, images in the available datasets can vary greatly depending on the locations and the weather conditions they represent. In this article, the capabilities of a custom-built driving simulator are explored, specifically its ability to simulate a wide range of weather conditions. The performance of a new synthetic dataset generated by this simulator is also assessed. The results indicate that using synthetic datasets in conjunction with real-world datasets can increase the training efficiency of CNNs by as much as 74%. The article paves a way forward for tackling the persistent problem of bias in vision-based datasets.
Keywords: advanced driver assistance systems; autonomous car; computer vision; dataset; deep learning; intelligent transportation systems; synthetic data; weather classification
Year: 2022 PMID: 35590881 PMCID: PMC9105758 DOI: 10.3390/s22093193
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Simulator.
Figure 2. Virtual interior.
Figure 3. Virtual car.
Figure 4. Proposed environment.
Figure 5. Synthetic weather dataset.
Number of training images (our dataset) and testing images (BDD) per class.
| Class | Training | Testing |
|---|---|---|
| Clear | 9,613 | 1,764 |
| Cloudy | 38,949 | 1,677 |
| Foggy | 29,914 | 5 |
| Rainy | 29,857 | 396 |
| Total | 108,333 | 3,842 |
Figure 6. BDD (Berkeley DeepDrive) dataset.
Figure 7. Pipeline. Step 1: load the pretrained network; Step 2: unfreeze the classification layers and add a softmax layer (4,1); Step 3: train the weights of the classification layers with the synthetic dataset; Step 4: test the network accuracy with a real-world test dataset.
Results from CNN evaluations.
| Architecture | mAP | Trainable Parameters | Time (min) |
|---|---|---|---|
| AlexNet | 0.6856 ± 0.012 | 61M | 986 |
| VGGNet | 0.7334 ± 0.023 | 138M | 2930 |
| GoogLeNet | 0.6034 ± 0.009 | 7M | 618 |
| ResNet50 | 0.6183 ± 0.025 | 26M | 1020 |
| ResNet101 | 0.6300 ± 0.006 | 44M | 1242 |
Figure 8. Accuracy variation over each epoch for (a) AlexNet, (b) VGG, and (c) GoogLeNet models.
Figure 9. Accuracy variation over each epoch for residual networks: (a) ResNet50 and (b) ResNet101.