Jose M Celaya-Padilla, Carlos E Galván-Tejada, F E López-Monteagudo, O Alonso-González, Arturo Moreno-Báez, Antonio Martínez-Torteya, Jorge I Galván-Tejada, Jose G Arceo-Olague, Huizilopoztli Luna-García, Hamurabi Gamboa-Rosales.
Abstract
Among the current challenges of the Smart City, traffic management and maintenance are of utmost importance. Road surface monitoring is currently performed by humans, yet the surface condition is one of the main indicators of road quality, and it may drastically affect fuel consumption and the safety of both drivers and pedestrians. Abnormalities in the road, such as manholes and potholes, can cause accidents when drivers fail to identify them, and human-made obstacles, such as speed bumps, can do the same. Moreover, although such obstacles ought to be signposted according to road regulations, they are not always correctly marked. We therefore developed a novel method for the detection of road abnormalities (i.e., speed bumps). The method uses a gyro, an accelerometer, and a GPS sensor mounted in a car. After the vehicle cruises through several streets, data is retrieved from the sensors; then, using a cross-validation strategy, a genetic algorithm is used to find a logistic model that accurately detects road abnormalities. The proposed model had an accuracy of 0.9714 in a blind evaluation, with a false positive rate smaller than 0.018 and an area under the receiver operating characteristic curve of 0.9784. This methodology can detect speed bumps in quasi-real-time conditions and can be used to build a real-time surface monitoring system.
Keywords: smart car; speed bump detection; surface monitoring
Year: 2018 PMID: 29401637 PMCID: PMC5856042 DOI: 10.3390/s18020443
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Flow chart of the proposed methodology. Several sensors connected to an IoT device (A,B); the real-time sensor information is trimmed into two-second windows of data (C); several statistical features that characterize each window are extracted (D); a machine learning approach based on genetic algorithms is applied to find a logistic model that can be used to accurately detect speed bumps (E).
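Step (C), slicing the continuous sensor stream into fixed two-second windows, can be sketched as follows; the sampling rate and the reshape-based windowing are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def window_signal(samples, sample_rate_hz, window_seconds=2.0):
    """Split a 1-D sensor stream into fixed-length windows (step C in Figure 1)."""
    window_len = int(sample_rate_hz * window_seconds)
    n_windows = len(samples) // window_len
    # Drop the trailing partial window, then reshape into (n_windows, window_len)
    return np.asarray(samples[:n_windows * window_len]).reshape(n_windows, window_len)

# e.g., 10 s of a hypothetical 50 Hz signal -> five 2 s windows of 100 samples each
stream = np.arange(500)
windows = window_signal(stream, sample_rate_hz=50)
print(windows.shape)  # (5, 100)
```

Each row of the result is then passed to the feature-extraction step (D).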
Hardware specifications.
| Hardware | Description |
|---|---|
| Raspberry Pi 3 | Low-cost ARM computer with a quad-core 1.2 GHz 64-bit CPU, 1 GB RAM, wireless LAN and Bluetooth, GPIO, and 4 USB 2.0 ports; power consumption: 800 mA |
| MPU6050 | This sensor combines a MEMS accelerometer and a MEMS gyro in a single chip, with 16-bit analog-to-digital conversion capabilities for each channel |
| 7″ multi-touch screen | An 800 × 480 display that connects via the DSI port of the Raspberry Pi. It supports up to 10-finger touch, power consumption: 600 mA |
| BU-353-S4 | A SiRF Star IV powered GPS sensor with a 1 Hz refresh rate and a <2.5 m accuracy; power consumption: 55 mA |
| TL-PB10400 | 10400 mAh external battery |
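The MPU6050 row above mentions 16-bit analog-to-digital conversion per channel. A minimal sketch of turning those raw two's-complement register values into physical units, assuming the chip's default full-scale ranges (±2 g → 16384 LSB/g, ±250 °/s → 131 LSB/(°/s)); the paper does not state which ranges were used:

```python
def raw_to_signed(high_byte, low_byte):
    """Combine two MPU6050 data-register bytes into a signed 16-bit value."""
    value = (high_byte << 8) | low_byte
    return value - 65536 if value >= 32768 else value

def accel_g(raw, lsb_per_g=16384.0):
    """Convert a raw accelerometer reading to g (±2 g full scale -> 16384 LSB/g)."""
    return raw / lsb_per_g

def gyro_dps(raw, lsb_per_dps=131.0):
    """Convert a raw gyro reading to degrees/second (±250 deg/s -> 131 LSB per deg/s)."""
    return raw / lsb_per_dps

print(raw_to_signed(0x40, 0x00))           # 16384
print(accel_g(raw_to_signed(0x40, 0x00)))  # 1.0 (one g on that axis)
```

With a different configured full-scale range, only the LSB-per-unit constants change.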
Figure 2. Field experiment setup.
Extracted features.
| Feature | Formula |
|---|---|
| Mean (M1) | $M_1 = \frac{1}{N}\sum_{i=1}^{N} x_i$ |
| Variance (M2) | $M_2 = \frac{1}{N}\sum_{i=1}^{N} (x_i - M_1)^2$ |
| Skewness (M3) | $M_3 = \frac{1}{N}\sum_{i=1}^{N} \left(\frac{x_i - M_1}{\sigma}\right)^3$ |
| Kurtosis (M4) | $M_4 = \frac{1}{N}\sum_{i=1}^{N} \left(\frac{x_i - M_1}{\sigma}\right)^4$ |
| Standard deviation | $\sigma = \sqrt{M_2}$ |
| Max | $\max_i x_i$ |
| Dynamic range | $\max_i x_i - \min_i x_i$ |

$x_i$ is the ith raw signal value within the two-second window being processed, and $N$ is the number of samples in the window. (The formula column did not survive extraction; the expressions above are the standard definitions of these statistics.)
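A per-window feature extractor matching the feature list above can be sketched as follows; the population-variance and plain (non-excess) kurtosis conventions are assumptions, since the table's exact formulas are not reproduced here:

```python
import numpy as np

def extract_features(x):
    """Compute the statistical features of one two-second window x."""
    x = np.asarray(x, dtype=float)
    m1 = x.mean()                  # Mean (M1)
    m2 = x.var()                   # Variance (M2), population convention
    sd = np.sqrt(m2)               # Standard deviation
    z = (x - m1) / sd              # Standardized samples
    return {
        "M1": m1,
        "M2": m2,
        "M3": (z ** 3).mean(),     # Skewness (M3)
        "M4": (z ** 4).mean(),     # Kurtosis (M4), non-excess
        "SD": sd,
        "Max": x.max(),
        "DR": x.max() - x.min(),   # Dynamic range
    }

feats = extract_features([0.1, -0.2, 0.4, 0.0, 0.3])
```

Running this over every window of every sensor axis yields the candidate feature pool the genetic algorithm searches.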
Figure 3. Evolution of the accuracy.
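The accuracy evolution in Figure 3 comes from a genetic algorithm searching over feature subsets. A minimal structural sketch of such a search over binary feature masks follows; the operators, rates, population size, and toy fitness are illustrative, not the paper's configuration:

```python
import random

def evolve(n_features, fitness, pop_size=20, generations=50,
           crossover_rate=0.8, mutation_rate=0.02, seed=0):
    """Minimal genetic algorithm over binary feature masks (structural sketch)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        nxt = scored[:2]                                  # elitism: keep the best two
        while len(nxt) < pop_size:
            a, b = rng.sample(scored[:pop_size // 2], 2)  # parents from the top half
            if rng.random() < crossover_rate:
                cut = rng.randrange(1, n_features)
                child = a[:cut] + b[cut:]                 # one-point crossover
            else:
                child = a[:]
            # Bit-flip mutation
            child = [g ^ 1 if rng.random() < mutation_rate else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy fitness: reward masks matching a known "informative" subset. In the paper,
# fitness would instead be the cross-validated accuracy of a logistic model
# trained on the selected features.
target = [1, 0, 1, 1, 0, 0, 0, 1]
best = evolve(8, lambda m: sum(int(g == t) for g, t in zip(m, target)))
```

With elitism, the best accuracy per generation is non-decreasing, which is the shape Figure 3 tracks.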
Figure 4. Rank stability in 1000 models. The positive y-axis shows the number of times each feature was included in a given model, i.e., the frequency ranking. The negative y-axis shows the color-coded rank of each feature as each model was generated; for example, the sixth most frequent feature when all models had been generated (dark gray) was ranked lower (dark red) when only half of the models had been evolved. The x-axis shows the features ordered by rank. The starting color for each feature is assigned according to the feature's descending rank (from black down to white).
Figure 5. Forward selection model.
Final model.
| Feature | Coefficient | Std. Error | z Value | p-Value |
|---|---|---|---|---|
| Intercept | −8.066 | 1.254 | −6.433 | 1.250 × 10^−10 |
| gxM3 | −1.131 | 4.370 × 10^−1 | −2.589 | 9.633 × 10^−3 |
| gxDR | 5.070 × 10 | 5.974 × 10 | 8.487 | < 2 × 10^−16 |
| gyM3 | 2.500 | 7.024 × 10^−1 | 3.560 | 3.710 × 10^−4 |
| ayM4 | −7.382 × 10 | 2.335 × 10 | −3.162 | 1.569 × 10^−3 |
gxM3 and gxDR are the skewness and dynamic range of the gyro sensor’s x-axis information, respectively; gyM3 is the skewness of the gyro sensor’s y-axis information; and ayM4 is the kurtosis of the accelerometer’s y-axis information.
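Applying the fitted logistic model to a new window reduces to a weighted sum of the four features passed through the logistic function. The coefficient values below are placeholders in the right form (some exponents in the coefficient table did not survive extraction), not the paper's exact values:

```python
import math

# Placeholder coefficients; illustrative only, since the exact exponents
# of some table entries were lost in extraction.
COEF = {"intercept": -8.066, "gxM3": -1.131, "gxDR": 0.507, "gyM3": 2.500, "ayM4": -0.074}

def speed_bump_probability(gx_m3, gx_dr, gy_m3, ay_m4, coef=COEF):
    """Logistic model: P(bump) = 1 / (1 + exp(-(b0 + b1*gxM3 + b2*gxDR + b3*gyM3 + b4*ayM4)))."""
    logit = (coef["intercept"]
             + coef["gxM3"] * gx_m3
             + coef["gxDR"] * gx_dr
             + coef["gyM3"] * gy_m3
             + coef["ayM4"] * ay_m4)
    return 1.0 / (1.0 + math.exp(-logit))

p = speed_bump_probability(gx_m3=0.2, gx_dr=25.0, gy_m3=0.5, ay_m4=3.0)
```

Thresholding this probability (e.g., at 0.5) yields the binary bump/no-bump decision evaluated in the ROC and confusion-matrix figures.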
Figure 6. ROC curve of the representative model; black line = train performance, red line = blind performance.
Figure 7. Confusion matrix performance on the blind data set; light blue = true negative, orange = false negative, brown = false positive, dark blue = true positive.
Comparison of similar approaches.
| Author | Approach | Performance |
|---|---|---|
| Devapriya et al. | Computer vision | 30–92% TPR |
| Eriksson et al. | Accelerometer and GPS | 0.2% FPR |
| Mohan et al. | Accelerometer, microphone, GPS, and GSM antenna | 11.1% FPR and 22% FNR |
| Mednis et al. | Accelerometer | 90% TPR |
| Bhoraskar et al. | Accelerometer, magnetometer, and GPS | 10% FNR |
| Mohamed et al. | Accelerometer | 75.76–87.8% accuracy |
| Arroyo et al. | Accelerometer, GPS | 0.87 AUC, 0.91 recall |
| Aljaafreh et al. | Accelerometer, smart phone | N/A |
| Silva et al. | Accelerometer | 70–80% accuracy |
| Astarita et al. | Accelerometer, smart phone | 90% accuracy, 35% FP |
| González et al. | Accelerometer, gyro, smart phone | 0.82–0.944 AUC |
| Proposed approach | Accelerometer, gyro, and GPS | 97.14% accuracy, FPR < 0.018, 0.9784 AUC |
TPR, FPR, and FNR stand for true positive rate, false positive rate, and false negative rate, respectively.
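For reference, the three rates in the footnote are computed from confusion-matrix counts as follows; the counts shown are made up for illustration:

```python
def rates(tp, fp, tn, fn):
    """TPR, FPR, FNR from confusion-matrix counts (as used in the comparison table)."""
    return {
        "TPR": tp / (tp + fn),   # true positive rate (recall)
        "FPR": fp / (fp + tn),   # false positive rate
        "FNR": fn / (fn + tp),   # false negative rate = 1 - TPR
    }

r = rates(tp=90, fp=2, tn=98, fn=10)
```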
Results of the partition bias analysis.
| k | Train Data Set | Blind Data Set |
|---|---|---|
| 1 | 0.9903 | 0.9784 |
| 2 | 0.9877 | 0.9954 |
| 3 | 0.9903 | 0.9968 |
| 4 | 0.9872 | 0.9777 |
| 5 | 0.9917 | 0.9859 |
| Average | 0.9894 | 0.9868 |
AUC values for the best-performing model on each data set; k denotes the ith repetition.
Results of the z-normalization analysis.
| k | Train Data Set | Blind Data Set |
|---|---|---|
| 1 | 0.9945 | 0.9731 |
| 2 | 0.9865 | 0.9806 |
| 3 | 0.9867 | 0.9761 |
| 4 | 0.9855 | 0.9845 |
| 5 | 0.9909 | 0.9924 |
| Average | 0.98882 | 0.98134 |
AUC values for the best-performing model on each data set, using the z-normalized data; k denotes the ith repetition.
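A conventional way to perform the z-normalization analyzed above is to standardize both partitions with statistics computed on the training set only, so no blind-set information leaks into the model; whether the paper follows exactly this convention is an assumption:

```python
import numpy as np

def z_normalize(train, blind):
    """Z-score both sets using the training mean/SD only (no blind-set leakage)."""
    train = np.asarray(train, dtype=float)
    blind = np.asarray(blind, dtype=float)
    mu, sd = train.mean(axis=0), train.std(axis=0)
    return (train - mu) / sd, (blind - mu) / sd

# Two features, two training samples, one blind sample (toy data)
tr, bl = z_normalize([[1.0, 10.0], [3.0, 30.0]], [[2.0, 20.0]])
```

After this transform each training feature has zero mean and unit variance, while blind samples are expressed on the same scale.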