Heshan Padmasiri, Jithmi Shashirangana, Dulani Meedeniya, Omer Rana, Charith Perera.
Abstract
The incorporation of deep-learning techniques in embedded systems has greatly enhanced the capabilities of edge computing. However, most of these solutions rely on high-end hardware and often require a high processing capacity, which cannot be met by resource-constrained edge devices. This study presents a novel approach and a proof of concept for a hardware-efficient automated license plate recognition system for environments with limited resources. The proposed solution is implemented purely for low-resource edge devices and performs well under extreme illumination changes such as daytime and nighttime conditions. Generalisability is achieved through a novel set of neural networks tailored to different hardware configurations according to their computational capabilities and cost. The accuracy, energy efficiency, communication, and computational latency of the proposed models are validated using different license plate datasets in daytime, nighttime, and real-time settings. The results show performance competitive with state-of-the-art server-grade hardware solutions.
Keywords: edge computing; energy efficiency; low cost; night vision; resource-constrained devices
Year: 2022 PMID: 35214336 PMCID: PMC8880701 DOI: 10.3390/s22041434
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Summary of the related LP recognition studies on edge platforms (D: daytime, N: nighttime, S: synthetic images).
| Related Study | Description | Techniques | Type (D/N/S) | Performance |
|---|---|---|---|---|
| [ | Uses an NVIDIA Jetson TX1 embedded board with GPU. Provides LP recognition without a detection line. Not robust to broken or reflective plates. | AlexNet (CNN) | D | AC = 95.25% |
| [ | Real-time LP recognition on an embedded DSP platform. Operates under daytime conditions with sufficient daylight or artificial light from street lamps. High performance at low image resolution. | SVM | D | F = 86% |
| [ | Real-time LP recognition on a GPU-powered mobile platform by simplifying a trained neural network developed for a desktop/server environment. | CNN | D, N, S | AC = 94% |
| [ | Implemented on a Raspberry Pi 3 with a Pi NoIR v2 camera module. Robust to angle, lighting, and noise variations. Segmentation-free, avoiding character mis-segmentation errors. | CNN | D, S | AC = 97% |
| [ | A portable ALPR model trained on a desktop computer and exported to an Android mobile device. | CNN | D | AC = 77.2% |
Comparison of studies with synthetic and nighttime images (NT: nighttime images, Syn.: synthesised images).
| Study | NT | Syn. | Synthesised Method | Performance |
|---|---|---|---|---|
| [ | | ✓ | GAN-based | AC = 84.57% |
| [ | | ✓ | GAN-based | AC = 91.5% |
| [ | | ✓ | Augmentation (rotation, size and noise) | AC = 62.47% |
| [ | | ✓ | Augmentation, superimposition, GAN-based | AP = 99.32% |
| [ | | ✓ | Illumination and pose conditions | R = 93% |
| [ | | ✓ | Random modifications (colour, blur, noise) | AC = 99.98% |
| [ | ✓ | ✓ | Random modifications (colour, depth) | AC = 85.3% |
| [ | ✓ | ✓ | Intensity changes | FN = 1.5% |
| [ | ✓ | ✓ | Illumination and pose conditions | AC = 94% |
| [ | ✓ | | | AC = 96% |
| [ | ✓ | | | AC = 93% |
| [ | ✓ | | | AP = 95.5% |
| [ | ✓ | | | F = 98.32% |
| [ | ✓ | | | AC = 95.7% |
| [ | ✓ | | | AC = 93.99% |
| [ | ✓ | | | AC = 92.6% |
| [ | ✓ | | | AC = 86% |
| [ | ✓ | | | AC = 96.2% |
Figure 1. Overview of the proposed model.
Figure 2. Hardware stack of the proposed solution.
Figure 3. Two-stage license plate recognition pipeline.
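The two-stage pipeline first localises the plate in the frame and then reads characters from the cropped region. A minimal sketch of that control flow, with hypothetical `detect_plate` and `recognise_plate` stubs standing in for the paper's stage-1 and stage-2 subnetworks (the bounding box and plate string below are placeholders, not model output):

```python
# Sketch of a two-stage ALPR pipeline: stage 1 localises the plate,
# stage 2 reads characters from the cropped region.

def detect_plate(frame):
    """Stage 1 stub: return a bounding box (x, y, w, h) or None."""
    h, w = len(frame), len(frame[0])
    # placeholder: pretend the plate occupies the centre third of the frame
    return (w // 3, h // 3, w // 3, h // 3)

def crop(frame, box):
    """Cut the detected region out of the frame."""
    x, y, w, h = box
    return [row[x:x + w] for row in frame[y:y + h]]

def recognise_plate(patch):
    """Stage 2 stub: return the plate string for a cropped patch."""
    return "ABC-1234" if patch else None

def run_pipeline(frame):
    box = detect_plate(frame)
    if box is None:
        return None          # no plate found; skip stage 2 entirely
    return recognise_plate(crop(frame, box))

frame = [[0] * 90 for _ in range(60)]   # dummy 90x60 grayscale frame
print(run_pipeline(frame))              # -> ABC-1234
```

Skipping stage 2 when detection fails is what keeps average latency low on the constrained tiers: the expensive recognition subnetwork only runs on frames that actually contain a plate.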
Hardware tier details.
| Hardware Tier | Specification | Cost (as of January 2022) |
|---|---|---|
| Low-tier | Raspberry Pi Zero | USD 10.60 |
| Mid-tier | Raspberry Pi 3 B+ | USD 38.63 |
| High-tier | Raspberry Pi 3 B+, Intel Neural Compute Stick 2 | USD 38.63 + USD 89.00 |
Figure 4. High-tier model (left): internal view; (right): exterior deployment view.
Figure 5. Circuit diagram of the design.
Figure 6. Data flow of the proposed system.
Figure 7. Pix2Pix for nighttime image generation.
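The paper synthesises nighttime training images from daytime ones with Pix2Pix; a full GAN is beyond a sketch. As an illustrative stand-in for the simpler photometric route also listed in the comparison table ("random modifications (colour, blur, noise)"), and explicitly not the paper's Pix2Pix method, a crude day-to-night transform dims pixels and injects sensor-like noise:

```python
import random

def day_to_night(image, dim=0.25, noise=8, seed=0):
    """Crude photometric day->night approximation (not Pix2Pix):
    scale brightness down and add Gaussian sensor-like noise.
    `image` is a list of rows of 0-255 grayscale values."""
    rng = random.Random(seed)   # fixed seed makes the output repeatable
    out = []
    for row in image:
        out.append([
            max(0, min(255, int(p * dim + rng.gauss(0, noise))))
            for p in row
        ])
    return out

day = [[200, 180], [160, 220]]
night = day_to_night(day)       # dimmed, noisy version of `day`
print(night)
```

Transforms like this are cheap but do not model headlight glare or plate reflectivity, which is why GAN-based translation tends to yield more realistic nighttime training data.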
Figure 8. Stochastic super-network (left): PC-DARTS; (right): FBNet.
Figure 9. Model architectures (left): hardware-optimised detection; (middle): hardware-agnostic detection; (right): recognition subnetworks.
Detailed summary of the data set.
| Data Set | CCPD Day | Synthesised Nighttime | Sri Lankan LP | Sri Lankan LP |
|---|---|---|---|---|
| Sample | — | — | — | — |
| No. of images | 200,000 | 200,000 | 100 | 100 |
Performance results of the detection model.
| Model Name | Resource Requirement | Latency (s) | Model Size | AP | AP | AP |
|---|---|---|---|---|---|---|
| s1_h | Raspberry Pi 3b+, Intel® NCS2 | 0.012 | 0.7776 | 0.9284 | 0.8451 | 0.85 |
| s1_h_h | Raspberry Pi 3b+, Intel® NCS2 | 0.011 | 0.8707 | 0.9299 | 0.8401 | 0.9 |
| s1_m | Raspberry Pi 3b+ | 0.157 | 0.6869 | 0.9005 | 0.7982 | 0.85 |
| s1_m_h | Raspberry Pi 3b+ | 0.004 | 0.6830 | 0.9029 | 0.7962 | 1.0 |
| s1_l | Raspberry Pi Zero | 4.54 | 0.5568 | 0.8422 | 0.7146 | 0.95 |
| s1_l_h | Raspberry Pi Zero | 4.08 | 0.5625 | 0.8327 | 0.6987 | 0.95 |
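The latency column bounds throughput directly (frames per second ≈ 1/latency). A quick derivation from the per-frame detection latencies reported above; the latency values are copied from the table, while the fps figures are computed here, not reported by the paper:

```python
# Throughput implied by the per-frame detection latencies above.
latencies = {           # seconds per frame, from the detection table
    "s1_h_h": 0.011,    # Raspberry Pi 3 B+ with Intel NCS2
    "s1_m": 0.157,      # Raspberry Pi 3 B+ alone
    "s1_l_h": 4.08,     # Raspberry Pi Zero
}
for name, seconds in latencies.items():
    print(f"{name}: {1 / seconds:.1f} frames/s")
```

At roughly 91 fps the high-tier model is comfortably real-time, the mid-tier (~6 fps) handles slow traffic, and the Pi Zero at around 0.25 fps is effectively snapshot-only.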
Performance results of the recognition model.
| Model Name | Resource Requirement | Latency (s) | Model Size | Accuracy | Accuracy | Accuracy |
|---|---|---|---|---|---|---|
| s2_h | Raspberry Pi 3b+, Intel® NCS2 | 0.021 | 4.5 | 0.9987 | 0.9476 | 0.9873 |
| s2_m | Raspberry Pi 3b+ | 0.148 | 11.7 | 0.9877 | 0.9382 | 0.9882 |
| s2_l | Raspberry Pi Zero | 6.2 | 4.5 | 0.9565 | 0.9054 | 0.9586 |
Figure 10. Model accuracy on the synthetically generated dataset (left): detection; (right): recognition.
Figure 11. Camera positions (left) and sample deployed image (right).
Model performance with respect to the camera position (Number of correctly identified images).
| Experiment | Number of Images | Correct (Low-Tier) | Correct (Mid-Tier) | Correct (High-Tier) | Camera Position |
|---|---|---|---|---|---|
| 1 | 27 | 25 | 26 | 26 | 1 |
| 2 | 35 | 30 | 31 | 34 | 1 |
| 3 | 33 | 30 | 31 | 33 | 2 |
| 4 | 29 | 24 | 25 | 28 | 2 |
| 5 | 25 | 21 | 23 | 25 | 3 |
| 6 | 28 | 22 | 25 | 27 | 3 |
| 7 | 30 | 25 | 26 | 28 | 4 |
| 8 | 26 | 19 | 23 | 25 | 4 |
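Per-tier accuracy over all eight experiments follows directly from the correct/total counts above. A short sketch aggregating the table (row tuples copied from the experiments; the aggregate percentages are derived here, not quoted from the paper):

```python
# Per-tier accuracy aggregated from the camera-position table.
# Each row: (total images, correct low-tier, correct mid-tier, correct high-tier).
rows = [
    (27, 25, 26, 26), (35, 30, 31, 34), (33, 30, 31, 33), (29, 24, 25, 28),
    (25, 21, 23, 25), (28, 22, 25, 27), (30, 25, 26, 28), (26, 19, 23, 25),
]
total = sum(r[0] for r in rows)
for i, tier in enumerate(("low", "mid", "high"), start=1):
    correct = sum(r[i] for r in rows)
    print(f"{tier}-tier: {correct}/{total} = {correct / total:.1%}")
```

The aggregate confirms the expected ordering: each step up the hardware tiers buys a few more correctly identified plates across every camera position.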
Figure 12. Model accuracies of each experiment.
Hardware performance of each configuration.
| Hardware Tier | Power Consumption (W) | Average Battery Life (h) |
|---|---|---|
| Low-tier | 0.8 | 132.15 |
| Mid-tier | 5.15 | 11.03 |
| High-tier | 6.2 | 13.04 |
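Runtime equals usable pack capacity divided by average draw, so the table implies a battery capacity for each tier. A back-of-envelope check (the paper does not state pack sizes; the capacities below are inferred from its draw and runtime figures, and the differing values suggest each tier was tested with a different battery):

```python
# Implied battery capacity (Wh) = average draw (W) x measured life (h),
# derived from the hardware performance table above.
tiers = {
    "low": (0.8, 132.15),
    "mid": (5.15, 11.03),
    "high": (6.2, 13.04),
}
for name, (watts, hours) in tiers.items():
    print(f"{name}-tier: ~{watts * hours:.1f} Wh pack implied")
```

The same identity lets a deployer size a pack for a target runtime: a desired runtime in hours times the tier's draw in watts gives the minimum capacity needed.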
Comparison with the related studies.
| Study | Dataset | Resource | Accuracy | Latency |
|---|---|---|---|---|
| Lee | Nearly 500 images | NVIDIA Jetson TX1 | 95.24% | N/A |
| Arth | Test set 1: 260 images | Single Texas Instruments | 96% | 0.05211 s |
| Rezvi | Italian rear LP with | Quadro K2200, Jetson | Det: 61%, | Det: 0.026 s, |
| Izidio | Custom dataset | Raspberry Pi3 (ARM | Det: 99.37%, | 4.88 s |
| Proposed high-tier solution | CCPD (200,000 images), | Raspberry Pi 3B+, | Det: 90%, | Det: 0.011 s |