| Literature DB >> 35808251 |
Sumeyra Tas, Ozgen Sari, Yaser Dalveren, Senol Pazar, Ali Kara, Mohammad Derawi.
Abstract
This study proposes a simple convolutional neural network (CNN)-based model for vehicle classification in low-resolution surveillance images collected by a standard security camera installed far from a traffic scene. To evaluate its effectiveness, the proposed model is tested on a new dataset containing tiny (100 × 100 pixels), low-resolution (96 dpi) vehicle images. The proposed model is then compared with well-known VGG16-based CNN models in terms of accuracy and complexity. Results indicate that although the well-known models provide higher accuracy, the proposed method offers acceptable accuracy (92.9%) along with a simple and lightweight solution for vehicle classification in low-quality images. Thus, this study may provide useful insight for further research on the use of standard low-cost cameras to enhance the capabilities of intelligent systems such as intelligent transportation system applications.
Keywords: convolutional neural network; deep learning; low quality; low resolution; vehicle classification
Year: 2022 PMID: 35808251 PMCID: PMC9268885 DOI: 10.3390/s22134740
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Figure 1. Architecture of the proposed model.
Figure 2. Architecture of the VGG16 pre-trained model.
Figure 3. (a) Position of the camera placed on the minaret and (b) a view from the camera.
Figure 4. Samples of vehicles: (a) bike, (b) car, (c) juggernaut, (d) minibus, (e) pickup, and (f) truck.
Figure 5. The flowchart of data preprocessing.
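The paper's exact preprocessing pipeline is shown only in Figure 5 and is not reproduced in this record; the following is a minimal NumPy sketch of the two steps such a pipeline typically includes for this dataset: scaling the 100 × 100 RGB crops to [0, 1] and one-hot encoding the six vehicle classes listed in Figure 4. The function name `preprocess` and its exact shapes are assumptions for illustration, not the authors' code.

```python
import numpy as np

# The six classes from Figure 4 of the paper.
CLASSES = ["bike", "car", "juggernaut", "minibus", "pickup", "truck"]

def preprocess(images, labels):
    """Sketch of a typical preprocessing step (not the paper's exact code).

    images: uint8 array of shape (N, 100, 100, 3)
    labels: sequence of N class-name strings from CLASSES
    Returns float32 inputs scaled to [0, 1] and one-hot targets.
    """
    x = images.astype(np.float32) / 255.0            # normalize pixel values
    idx = np.array([CLASSES.index(c) for c in labels])
    y = np.eye(len(CLASSES), dtype=np.float32)[idx]  # one-hot encode classes
    return x, y
```

For example, `preprocess(batch, ["car", "truck"])` would return a `(2, 100, 100, 3)` float input tensor and a `(2, 6)` one-hot label matrix.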
The specifications of the server used in the study.

| Component | Specification |
|---|---|
| CPU | Intel Core i7-7500U @ 3.5 GHz |
| GPU | NVIDIA GeForce 920M |
| RAM | 8 GB |
| OS | Windows 10 (64-bit) |
Figure 6. For the proposed model: (a) training and validation accuracy, and (b) training and validation loss.
Figure 7. For the VGG16 pre-trained model: (a) training and validation accuracy, and (b) training and validation loss.
Figure 8. For the VGG16 fine-tuning pre-trained model: (a) training and validation accuracy, and (b) training and validation loss.
Comparison of the test accuracy and loss for the CNN-based models.
| CNN Models | Accuracy (%) | Loss (%) | # Layers | # Parameters | Training Time (Minutes) |
|---|---|---|---|---|---|
| Proposed Model | 92.9 | 30.3 | 9 | ~17 k | ~6 |
| VGG16 Pre-trained Model | 96 | 24.7 | 21 | ~15.3 M | ~28 |
| VGG16 Fine-tuning Pre-trained Model | 99.2 | 7.7 | 21 | ~15.3 M | ~15 |
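As a rough sanity check on the table's parameter counts, the gap between the proposed model (~17 k parameters) and the VGG16 variants (~15.3 M) can be verified arithmetically. The per-layer widths of the proposed 9-layer model are not given in this record, so only VGG16's published convolutional base is counted below; its classifier head accounts for the remainder of the ~15.3 M total.

```python
def conv_params(k, c_in, c_out):
    """Parameters of a k x k convolution with bias: (k*k*c_in + 1) * c_out."""
    return (k * k * c_in + 1) * c_out

# VGG16 convolutional base: (input channels, output channels) per 3x3 conv layer.
vgg16_convs = [(3, 64), (64, 64),
               (64, 128), (128, 128),
               (128, 256), (256, 256), (256, 256),
               (256, 512), (512, 512), (512, 512),
               (512, 512), (512, 512), (512, 512)]

vgg16_base = sum(conv_params(3, c_in, c_out) for c_in, c_out in vgg16_convs)
print(vgg16_base)  # 14714688 parameters in the convolutional base alone
```

The base alone holds about 14.7 M parameters, roughly 870× the proposed model's total, which is consistent with the table's contrast between the two approaches in training time and model size.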