Mohammad Abboush, Daniel Bamal, Christoph Knieke, Andreas Rausch.
Abstract
Hardware-in-the-Loop (HIL) has been recommended by ISO 26262 as an essential test bench for determining the safety and reliability characteristics of automotive software systems (ASSs). However, due to the complexity and the huge amount of data recorded by the HIL platform during the testing process, conventional data analysis methods that rely on human experts to detect and classify faults are not feasible. Therefore, effective means based on historical data sets are required to analyze the records of the testing process efficiently. Even though data-driven fault diagnosis is superior to other approaches, selecting the appropriate technique from the wide range of Deep Learning (DL) techniques is challenging. Moreover, training data containing automotive faults are rare and considered highly confidential by the automotive industry. Using hybrid DL techniques, this study proposes a novel intelligent fault detection and classification (FDC) model to be utilized during the V-cycle development process, i.e., the system integration testing phase. To this end, an HIL-based real-time fault injection framework is used to generate faulty data without altering the original system model. In addition, a combination of the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) is employed to build the model structure. In this study, eight types of sensor faults are considered to cover the most common potential faults in the signals of ASSs. As a case study, a gasoline engine system model is used to demonstrate the capabilities and advantages of the proposed method and to verify the performance of the model. The results prove that the proposed method achieves better detection and classification performance than other standalone DL methods. Specifically, the overall detection accuracies of the proposed structure in terms of precision, recall and F1-score are 98.86%, 98.90% and 98.88%, respectively.
For classification, the experimental results also demonstrate its superiority on unseen test data, with an average accuracy of 98.8%.
Keywords: 1D-CNN; LSTM; automotive software systems; deep learning; fault detection and classification; hardware-in-the-loop (HIL); model-based testing phases; real-time fault injection
Year: 2022 PMID: 35684686 PMCID: PMC9185421 DOI: 10.3390/s22114066
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Figure 1. Internal structure of LSTM cell.
Figure 2. Healthy signal and fault types: Stuck-at fault, Offset fault, Hard-over fault, Noise fault.
Figure 3. Healthy signal and fault types: Gain fault, Drift fault, Delay fault, Spike fault.
Values of α and β for all fault types (sensor fault model: s_f(t) = α·s(t) + β).
| Fault Type | α | β |
|---|---|---|
| Healthy Signal | 1 | 0 |
| Stuck-at Fault | 0 | 0 or 1, and it varies with time |
| Offset/Bias Fault | 1 | fixed constant value |
| Gain Fault | greater than 1 | 0 |
| Noise Fault | 1 | random value |
| Hard-Over Fault | 0 | higher than maximum threshold |
| Spike Fault | 1 | value varies with time |
| Drift Fault | 1 | value increases with time |
| Delay Time Fault | 0 | last cycle value of s(t) |
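The fault behaviors summarized above follow the common multiplicative/additive sensor-fault model s_f(t) = α·s(t) + β. A minimal stdlib-Python sketch of injecting such faults into a clean signal is shown below; all concrete parameter values (offset of 5, gain of 2, noise width, spike spacing, hard-over level) are illustrative assumptions, not the paper's settings.

```python
import math
import random

def inject_fault(signal, fault_type, t0=0):
    """Apply a sensor fault to a sampled signal from time index t0 onward,
    following the model s_f(t) = alpha * s(t) + beta. Parameter values
    below are illustrative, not the paper's exact configuration."""
    out = list(signal)
    n = len(out)
    if fault_type == "healthy":
        pass                                   # alpha = 1, beta = 0
    elif fault_type == "stuck-at":
        for t in range(t0, n):
            out[t] = out[t0]                   # alpha = 0, output frozen
    elif fault_type == "offset":
        for t in range(t0, n):
            out[t] += 5.0                      # alpha = 1, beta = constant
    elif fault_type == "gain":
        for t in range(t0, n):
            out[t] *= 2.0                      # alpha > 1, beta = 0
    elif fault_type == "noise":
        for t in range(t0, n):
            out[t] += random.gauss(0.0, 0.5)   # alpha = 1, beta = random
    elif fault_type == "hard-over":
        for t in range(t0, n):
            out[t] = 1e6                       # saturated above max threshold
    elif fault_type == "drift":
        for t in range(t0, n):
            out[t] += 0.1 * (t - t0)           # beta grows with time
    elif fault_type == "spike":
        for t in range(t0, n, 10):
            out[t] += 50.0                     # intermittent large beta
    elif fault_type == "delay":
        for t in range(n - 1, t0, -1):
            out[t] = out[t - 1]                # output lags by one cycle
    return out

clean = [math.sin(0.1 * t) for t in range(100)]
faulty = inject_fault(clean, "offset", t0=20)
```

A healthy segment before t0 stays untouched, so one recording can mix healthy and faulty windows, as in the data-generation setup described later.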
Overview of the related work.
| Related Works | Application Domain | Tasks | Method | Collected Data Set | Fault Classes | Remarks |
|---|---|---|---|---|---|---|
| Safavi et al. [ | Autonomous vehicles | Fault detection, isolation, identification and prediction | Multiclass deep neural network (DNN) | Real autonomous driving data set | Four | High accuracy 99.84% Low fault coverage High data acquisition cost |
| Theissler [ | Analyze automotive test recordings | Anomaly detection | Ensemble classifier-based ML | Real driving data set | Two | Low accuracy 85% Low fault coverage High data acquisition cost |
| Jung [ | Combustion engine | Fault classification | Bayesian-based method | Real engine data set | Eight | Low accuracy 85% High fault coverage High data acquisition cost |
| Kaplan et al. [ | Electric vehicles (EVs) | Fault diagnosis | LSTM | Simulation and real data set | Four | High accuracy 97% Low fault coverage Low data acquisition cost |
| Zhong et al. [ | Automotive engines | Simultaneous fault diagnosis | Probabilistic reject machine | Real data set | Ten | Low accuracy 88.74% Low fault coverage Low data acquisition cost |
| Biddle et al. [ | Autonomous vehicles | Fault detection, identification and prediction | SVM | Simulation data set | Five | High accuracy 97.01% Low fault coverage Low data acquisition cost |
| Garramiola et al. [ | Railway traction drives | Fault detection and isolation | Observer-based, frequency analysis and hardware redundancy | Real-time simulation data set | Two | Low accuracy Low fault coverage Low data acquisition cost |
| Raveendran et al. [ | Air brake system | Fault identification | Random forest | Real-time simulation data set | Two | Low accuracy 92% Low fault coverage Low data acquisition cost |
| Namburu et al. [ | Automotive engines | Fault diagnosis | Pattern recognition | Real-time simulation data set | Eight | High accuracy 99.25% High fault coverage Low data acquisition cost |
| Proposed Model | Automotive software systems development | Fault detection and classification | Hybrid CNN-LSTM | Real-time simulation data set | Eight | High accuracy 98.85% High fault coverage Low data acquisition cost |
Figure 4. Hybrid DL-based FDC methodology.
Figure 5. Deep neural network architecture of the proposed model.
The architecture parameters of the proposed model.
| Layer | Size and Parameters |
|---|---|
| CNN Layer 1 | Filters: 8, Kernel Size: 2, Activation: ReLU, Input Shape: [30 × 5], Output Shape: [30 × 8] |
| CNN Layer 2 | Filters: 8, Kernel Size: 2, Activation: ReLU, Output Shape: [30 × 8] |
| CNN Layer 3 | Filters: 8, Kernel Size: 2, Activation: ReLU, Output Shape: [30 × 8] |
| CNN Layer 4 | Filters: 8, Kernel Size: 2, Activation: ReLU, Output Shape: [30 × 8] |
| CNN Layer 5 | Filters: 8, Kernel Size: 2, Activation: ReLU, Output Shape: [30 × 8] |
| Batch Normalization Layer 1 | Output Shape: [30 × 8] |
| Max Pooling Layer | Pool Size: 2, Output Shape: [15 × 8] |
| LSTM Layer 1 | Neurons: 64, Activation: ReLU, Output Shape: [15 × 64] |
| LSTM Layer 2 | Neurons: 64, Activation: ReLU, Output Shape: [15 × 64] |
| LSTM Layer 3 | Neurons: 64, Activation: ReLU, Output Shape: [15 × 64] |
| LSTM Layer 4 | Neurons: 64, Activation: ReLU, Output Shape: 64 |
| Batch Normalization Layer 2 | Output Shape: 64 |
| Flatten Layer | Output Shape: 64 |
| Output Layer | Output Shape: 9 |
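The layer shapes in the table can be verified with a small shape-propagation sketch: Conv1D with 'same' padding preserves the 30 time steps (which the [30 × 5] → [30 × 8] shapes imply), max pooling halves them, and the last LSTM layer collapses the sequence to a 64-dimensional vector for the 9-way output. This is plain Python shape arithmetic, not a framework implementation.

```python
def conv1d_same(steps, channels, filters, kernel_size):
    # 'same' padding keeps the time dimension; only the channel
    # count changes to the number of filters
    return (steps, filters)

def max_pool1d(steps, channels, pool_size):
    # pooling divides the time dimension by the pool size
    return (steps // pool_size, channels)

def lstm(steps, channels, units, return_sequences):
    # returning sequences keeps the time axis; otherwise one vector remains
    return (steps, units) if return_sequences else (units,)

shape = (30, 5)                                  # input window: 30 steps x 5 signals
for _ in range(5):                               # five CNN layers: 8 filters, kernel 2
    shape = conv1d_same(*shape, filters=8, kernel_size=2)
shape = max_pool1d(*shape, pool_size=2)          # [30 x 8] -> [15 x 8]
for _ in range(3):                               # first three LSTM layers keep the sequence
    shape = lstm(*shape, units=64, return_sequences=True)
shape = lstm(*shape, units=64, return_sequences=False)  # final LSTM emits one vector
# the 9-way output layer then maps the 64-vector to class scores
```

The nine output units correspond to the healthy class plus the eight fault types.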
Figure 6. System architecture of the case study with different test phases of the V-model.
Figure 7. Scheme of the complete HIL simulation system.
Configuration of fault injection with collected data samples.
| Fault Type | APP Value | RPM Value | Fault Duration | Data Samples |
|---|---|---|---|---|
| Gain | 10 | 5 | 0–300 | 59,945 |
| Offset | 5 | 300 | 0–300 | 63,910 |
| Noise | 1–100 | 0–8191 | 123–288 | 46,115 |
| Stuck-at | 0 | 0 | 0–300 | 59,830 |
| Drift | 0.1 | 10 | 17–86, 44–100, 122–300 | 60,195 |
| Hard-Over | 127 | 10,000 | 122–135, 184–225 | 34,800 |
| Spike | 1–100 | 700–8191 | 0–96, 103–293 | 57,805 |
| Delay | 10 | 3 | 0–96, 121–286 | 55,905 |
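Before training, recordings like these are typically segmented into fixed-length windows matching the model's [30 × 5] input. A minimal sketch follows; the window length of 30 comes from the architecture table, while the non-overlapping stride is an assumption here.

```python
def make_windows(samples, window=30, stride=30):
    """Segment a multichannel recording (one feature row per timestep)
    into fixed-length windows matching the model's [30 x 5] input shape.
    The non-overlapping stride is an assumption, not the paper's setting."""
    return [samples[i:i + window]
            for i in range(0, len(samples) - window + 1, stride)]

# toy recording: 95 timesteps, 5 monitored signals per timestep
recording = [[float(t + c) for c in range(5)] for t in range(95)]
windows = make_windows(recording)      # three non-overlapping windows
```

Each window is then labeled with the fault class active during that interval, yielding the per-fault sample counts listed in the table.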
Figure 8. Driving cycle in dSPACE ControlDesk.
Figure 9. System behavior under fault conditions: (a) Gain, Offset, Noise, Stuck-at fault and healthy signal. (b) Drift, Hard-over, Spike, Delay fault and healthy signal.
Figure 10. Flowchart of model implementation and optimization.
Range of hyperparameters used in the optimization process.
| Hyperparameter | Range |
|---|---|
| CNN Layers | 0–5 |
| LSTM Layers | 0–5 |
| Dense Layers | 0–5 |
| Epochs | 50–900 |
| Max Pooling Layer | 0–1 |
| Drop Layer | 0–2 |
| Batch Normalization Layer | 0–2 |
| Batch Size | 64–150 |
| Learning Rate | 0.001–0.0001 |
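One straightforward way to explore ranges like these is random search: draw candidate configurations from the table's ranges and keep the best-scoring one. The paper's exact optimization procedure is not specified here, so the sketch below is only illustrative, and the dummy objective merely stands in for training the model and measuring validation accuracy.

```python
import random

SEARCH_SPACE = {                        # integer ranges from the table above
    "cnn_layers": (0, 5),
    "lstm_layers": (0, 5),
    "dense_layers": (0, 5),
    "epochs": (50, 900),
    "max_pooling": (0, 1),
    "dropout_layers": (0, 2),
    "batch_norm_layers": (0, 2),
    "batch_size": (64, 150),
}

def sample_config(rng):
    """Draw one candidate configuration from the search space."""
    cfg = {k: rng.randint(lo, hi) for k, (lo, hi) in SEARCH_SPACE.items()}
    cfg["learning_rate"] = 10 ** rng.uniform(-4, -3)   # 0.0001 .. 0.001
    return cfg

def random_search(evaluate, trials=50, seed=0):
    """Keep the configuration with the highest evaluation score."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = sample_config(rng)
        score = evaluate(cfg)           # would train and validate the model
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# dummy objective standing in for validation accuracy
best, score = random_search(lambda c: -abs(c["cnn_layers"] - 5))
```

Sampling the learning rate on a log scale is a common choice for ranges spanning an order of magnitude, such as 0.0001 to 0.001 here.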
Figure 11. Hyperparameter optimization results. (a) Validation accuracy with different numbers of layers. (b) Validation accuracy with different numbers of epochs. (c) Validation accuracy with different numbers of dropout layers. (d) Validation accuracy with different numbers of batch normalization layers. (e) Validation accuracy with different learning rates. (f) Validation accuracy with different batch sizes.
Optimal hyperparameters for the CNN-LSTM model.
| Hyperparameter | Value |
|---|---|
| CNN Layers | 5 |
| LSTM Layers | 4 |
| Dense Layers | 0 |
| Epochs | 850 |
| Max Pooling Layer | 1 |
| Drop Layer | 0 |
| Batch Normalization Layer | 2 |
| Batch Size | 64 |
| Learning Rate | 0.0005 |
Fault detection accuracies of the models.
| Model | Precision | Recall | F1-Score |
|---|---|---|---|
| CNN | 92.69% | 91.99% | 92.32% |
| LSTM | 95.98% | 94.87% | 95.36% |
| CNN-LSTM | 98.86% | 98.90% | 98.88% |
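Precision, recall and F1-score all derive from the confusion counts, with F1 the harmonic mean of the other two. The short sketch below shows the definitions and checks that the CNN-LSTM row is internally consistent (the F1 of 98.86% precision and 98.90% recall is approximately 98.88%).

```python
def prf1(tp, fp, fn):
    """Precision, recall and F1 from true-positive, false-positive and
    false-negative counts; F1 is the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# consistency check on the table's CNN-LSTM row
f1_table = 2 * 0.9886 * 0.9890 / (0.9886 + 0.9890)
```

Because the harmonic mean penalizes imbalance, reporting F1 alongside precision and recall guards against a model that trades one for the other.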
Figure 12. Classification performance in terms of precision.
Figure 13. Classification performance in terms of recall.
Figure 14. Classification performance in terms of F1-score.
Figure 15. Classification performance in terms of the AUC–ROC curve.
Training and inference times of the models.
| Model | Training Time (s) | Inference Time (s) |
|---|---|---|
| CNN | 368.4 | 1.14 |
| LSTM | 4874.7 | 5.22 |
| CNN-LSTM | 9541.2 | 3.07 |