S Naveen Venkatesh, G Chakrapani, S Babudeva Senapti, K Annamalai, M Elangovan, V Indira, V Sugumaran, Vetri Selvi Mahamuni.
Abstract
Misfire detection in an internal combustion (IC) engine is an important activity. An undetected misfire wastes fuel and reduces power; given the high cost of fuel, this waste cannot be afforded, and even when one is willing to spend more on fuel, the engine power drops and vehicle performance falls drastically. Hence, researchers have paid considerable attention to detecting and rectifying misfire in IC engines. Conventional diagnostic techniques demand a high level of human intelligence and professional expertise in the field, which has led researchers to look for intelligent and automatic diagnostic tools. Many techniques have been suggested to detect misfire in IC engines. This paper proposes the use of transfer learning to detect misfire in an IC engine. First, vibration signals were collected from the engine head and plotted; these plots serve as input to deep learning algorithms, which learn from the vibration-signal plots and classify the misfire state of the engine. In the present work, pretrained networks such as AlexNet, VGG-16, GoogLeNet, and ResNet-50 are employed to identify the misfire state of the engine. For each pretrained network, the effect of hyperparameters such as batch size, solver, learning rate, and train-test split ratio was studied, and the best performing network is suggested for misfire detection.
Year: 2022 PMID: 35845904 PMCID: PMC9287110 DOI: 10.1155/2022/7606896
Source DB: PubMed Journal: Comput Intell Neurosci
Figure 1. Stages of deep learning application in mechanical systems.
Related works on deep learning-based methods for mechanical systems.
| Reference | Deep learning technique | Mechanical system |
| --- | --- | --- |
| [ | CNN with wavelet transform | Motor bearing |
| [ | Hierarchical CNN | Roller bearing |
| [ | CNN | |
| [ | Sparse autoencoder and deep belief network | |
| [ | Recurrent neural network | |
| [ | Stacked autoencoder | Gear box |
| [ | Generative adversarial network | |
| [ | CNN | Centrifugal pump |
Figure 2. Overall methodology of fault diagnosis for misfire detection in an IC engine.
Figure 3. IC engine experimental setup.
IC engine specification.
| Parameter | Specification |
| --- | --- |
| Manufacturer | Hindustan Motors |
| Fuel | Petrol (gasoline) |
| No. of cylinders | 4 |
| Alternator speed | 1500 rpm |
| Power | 7.35 kW |
| Bore diameter × stroke length | 88.90 mm × 73.02 mm |
| Cooling system | Water cooled |
Figure 4. Sample vibration plots for (a) normal operating condition of the IC engine and (b) misfire in cylinder 1.
Figure 5. General architecture of convolutional neural networks.
Figure 6. Overall workflow of misfire detection in IC engines using pretrained networks.
Figure 7. Vibration plots for normal condition in the IC engine and misfire in cylinder 1, cylinder 2, cylinder 3, and cylinder 4.
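Plots like those in Figures 4 and 7 are the images the pretrained networks consume. As a rough, numpy-only illustration of how a 1-D vibration trace becomes a 2-D image, the sketch below rasterizes a synthetic signal into a plot-like array; the signal, sampling, and image size are illustrative, and the paper presumably saved actual rendered plots instead.

```python
# Sketch: turning a 1-D vibration signal into a plot-like 2-D image of
# the size the pretrained networks expect. Illustrative only.
import numpy as np

def signal_to_image(signal, height=224, width=224):
    """Rasterize a 1-D signal as a binary 'line plot' image."""
    sig = np.asarray(signal, dtype=float)
    # Resample to one sample per image column.
    cols = np.interp(np.linspace(0, len(sig) - 1, width),
                     np.arange(len(sig)), sig)
    # Scale amplitudes into pixel rows (row 0 = top of the image).
    lo, hi = cols.min(), cols.max()
    rows = ((hi - cols) / (hi - lo + 1e-12) * (height - 1)).astype(int)
    img = np.zeros((height, width), dtype=np.uint8)
    img[rows, np.arange(width)] = 1  # one pixel per column
    return img

# Synthetic 'vibration' trace: a tone plus noise.
t = np.linspace(0, 1, 4096)
trace = np.sin(2 * np.pi * 25 * t) + 0.1 * np.random.randn(t.size)
image = signal_to_image(trace)
print(image.shape)  # (224, 224)
```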
Characteristic features of adopted pretrained networks.
| Model/network | Number of layers | Learnable parameters (in millions) | Input size of the image |
| --- | --- | --- | --- |
| AlexNet | 8 | 60.0 | 227 × 227 |
| VGG-16 | 16 | 137.0 | 224 × 224 |
| GoogLeNet | 22 | 7.1 | 224 × 224 |
| ResNet-50 | 50 | 25.7 | 224 × 224 |
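The parameter counts in the table follow directly from each architecture's layer configuration. As a check, the sketch below derives VGG-16's count from the standard configuration (the exact figure is about 138.4 million; the table's 137.0 presumably reflects rounding or a slightly different accounting).

```python
# Sketch: deriving VGG-16's learnable-parameter count from its layer
# configuration (3x3 convolutions throughout, three fully connected layers).
def conv_params(k, c_in, c_out):
    return k * k * c_in * c_out + c_out  # weights + biases

def fc_params(n_in, n_out):
    return n_in * n_out + n_out

# VGG-16 conv blocks: (input channels, output channels, number of convs).
blocks = [(3, 64, 2), (64, 128, 2), (128, 256, 3), (256, 512, 3), (512, 512, 3)]
total = 0
for c_in, c_out, n_convs in blocks:
    total += conv_params(3, c_in, c_out)               # first conv changes width
    total += (n_convs - 1) * conv_params(3, c_out, c_out)
# Fully connected head: 7x7x512 -> 4096 -> 4096 -> 1000 (ImageNet classes).
total += fc_params(7 * 7 * 512, 4096) + fc_params(4096, 4096) + fc_params(4096, 1000)
print(total)  # 138357544, i.e. ~138.4 million
```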
Performance of pretrained models for various train-test split ratios (classification accuracy, %).
| Pretrained model | 0.60 | 0.70 | 0.75 | 0.80 | 0.85 | Overall accuracy (%) |
| --- | --- | --- | --- | --- | --- | --- |
| AlexNet | 89.00 | **90.70** | 88.80 | 90.00 | 90.30 | 89.76 |
| VGG-16 | 95.00 | 92.70 | 96.80 | 97.00 | | |
| GoogLeNet | 92.00 | 88.70 | 85.60 | **96.00** | 92.00 | 90.86 |
| ResNet-50 | 84.00 | 86.70 | **93.60** | 89.00 | 84.00 | 87.46 |
Bold values signify the best classification accuracy delivered by each pretrained model across the train-test split ratios experimented with. In the last column, the pretrained network with the highest overall classification accuracy is highlighted in bold.
Performance of pretrained models for various solvers (classification accuracy, %).
| Pretrained model | SGDM | Adam | RMSprop | Overall accuracy (%) |
| --- | --- | --- | --- | --- |
| AlexNet | | 88.70 | 80.00 | 87.29 |
| VGG-16 | | 90.70 | 92.00 | |
| GoogLeNet | | 95.00 | 69.00 | 87.71 |
| ResNet-50 | 93.60 | **96.80** | | 93.76 |
Bold values signify the best classification accuracy delivered by each pretrained model across the solvers experimented with. In the last column, the pretrained network with the highest overall classification accuracy is highlighted in bold.
Performance of pretrained models for various learning rates.
| Pretrained model | 0.0001 | 0.0003 | 0.001 | Overall accuracy (%) |
| --- | --- | --- | --- | --- |
| AlexNet | 90.70 | | 74.70 | 86.67 |
| VGG-16 | 97.30 | | 81.30 | 92.81 |
| GoogLeNet | 96.00 | 96.00 | | 94.42 |
| ResNet-50 | | 96.00 | 94.40 | |
Every pretrained network performs differently for the learning rates considered. Bold values signify the best classification accuracy displayed by each pretrained network across the learning-rate variations. The overall classification accuracy of each network is presented in the last column.
Performance of pretrained models for various batch sizes.
| Pretrained model | 8 | 10 | 16 | 24 | 32 | Overall accuracy (%) |
| --- | --- | --- | --- | --- | --- | --- |
| AlexNet | 89.30 | **94.00** | 79.30 | 89.30 | 90.70 | 88.52 |
| VGG-16 | 98.40 | | 97.70 | 97.30 | 98.30 | |
| GoogLeNet | 98.00 | **98.00** | 91.00 | 95.00 | 94.00 | 95.20 |
| ResNet-50 | 96.00 | **97.60** | 83.20 | 93.60 | 97.40 | 93.56 |
Bolded values represent the best classification accuracy obtained by the individual pretrained networks for experimentation with different batch sizes. The pretrained network with best overall classification accuracy is represented in the final column.
Optimal hyperparameters for pretrained model.
| Pretrained model | Split ratio | Optimizer | Learning rate | Batch size |
| --- | --- | --- | --- | --- |
| AlexNet | 0.70 | SGDM | 0.0003 | 10 |
| VGG-16 | 0.85 | SGDM | 0.0003 | 10 |
| GoogLeNet | 0.80 | SGDM | 0.001 | 10 |
| ResNet-50 | 0.75 | Adam | 0.0001 | 10 |
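The search that produces a table like the one above can be written as a grid over the same four settings. The sketch below uses a stand-in `evaluate` function (a made-up score; in the study this would be the test accuracy of a network fine-tuned under those settings, and the stub ignores the solver entirely, so solver ties are broken by iteration order).

```python
# Sketch: exhaustive grid over the four tuned hyperparameters.
# `evaluate` is a placeholder for "train the network, measure accuracy".
from itertools import product

split_ratios = [0.60, 0.70, 0.75, 0.80, 0.85]
solvers = ["sgdm", "adam", "rmsprop"]
learning_rates = [0.0001, 0.0003, 0.001]
batch_sizes = [8, 10, 16, 24, 32]

def evaluate(split, solver, lr, batch):
    # Placeholder score peaking at an arbitrary configuration;
    # the solver argument is intentionally unused in this stub.
    return -abs(split - 0.85) - abs(lr - 0.0003) - abs(batch - 10) / 100

best = max(product(split_ratios, solvers, learning_rates, batch_sizes),
           key=lambda cfg: evaluate(*cfg))
print(best)  # (0.85, 'sgdm', 0.0003, 10)
```

With 5 × 3 × 3 × 5 = 225 configurations, a full grid means 225 training runs per network; the paper instead varies one hyperparameter at a time, which is far cheaper but can miss interactions between settings.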
Performance comparison of pretrained models with optimal hyperparameters.
| Pretrained networks | AlexNet | VGG-16 | GoogLeNet | ResNet-50 |
| --- | --- | --- | --- | --- |
| Classification accuracy (%) | 94.00 | | 98.10 | 97.60 |
The value in bold signifies the highest classification accuracy obtained among all the networks considered after optimal hyperparameter tuning.
Figure 8. Training progress of the VGG-16 network for misfire detection in an IC engine.
Figure 9. Confusion matrix of the VGG-16 network for misfire detection in an IC engine.
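A confusion matrix like Figure 9 is built by counting (true class, predicted class) pairs over the test set. The sketch below does this for the five engine states with made-up labels; the class names and predictions are illustrative, not the paper's data.

```python
# Sketch: confusion matrix for the five engine states.
import numpy as np

CLASSES = ["normal", "misfire_c1", "misfire_c2", "misfire_c3", "misfire_c4"]

def confusion_matrix(y_true, y_pred, n_classes):
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1  # rows: true class, columns: predicted class
    return cm

# Illustrative integer-coded labels (indices into CLASSES).
y_true = [0, 0, 1, 2, 3, 4, 4]
y_pred = [0, 1, 1, 2, 3, 4, 0]
cm = confusion_matrix(y_true, y_pred, len(CLASSES))
accuracy = np.trace(cm) / cm.sum()  # correct predictions lie on the diagonal
print(cm)
print(accuracy)  # 5 of 7 correct
```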
Performance comparison with other state-of-the-art methods.
| State-of-the-art methods | Classification accuracy (%) | References |
| --- | --- | --- |
| KNN | 95.80 | [ |
| Decision tree | 80.60 | [ |
| LMT | 89.40 | [ |
| SVM | 91.20 | [ |
| K-star | 82.60 | [ |
| Proposed method | | |
The proposed technique achieved higher classification accuracy (highlighted in bold) than the other state-of-the-art techniques presented in the literature.