| Literature DB >> 35334315 |
Muhammad Ahmad, Saima Sadiq, Ala' Abdulmajid Eshmawi, Ala Saleh Alluhaidan, Muhammad Umer, Saleem Ullah, Michele Nappi.
Abstract
The disease known as COVID-19 has turned into a pandemic and spread all over the world. The fourth industrial revolution, known as Industry 4.0, encompasses digitization, the Internet of Things, and artificial intelligence, and has the potential to fulfill customized requirements during the COVID-19 emergency. The development of a prediction framework can help health authorities react appropriately and rapidly. Clinical imaging such as X-ray and computed tomography (CT) can play a significant part in the early diagnosis of COVID-19 patients and help ensure appropriate treatment. X-ray images, in particular, can support an automated system for the rapid identification of COVID-19 patients. This study uses a deep convolutional neural network (CNN) to extract significant features and discriminate X-ray images of infected patients from those of non-infected ones. Multiple image processing techniques are used to extract a region of interest (ROI) from each X-ray image. The ImageDataGenerator class is used to overcome the small dataset size and generate ten thousand augmented images. The performance of the proposed approach is compared with the state-of-the-art VGG16, AlexNet, and InceptionV3 models. Results demonstrate that the proposed CNN model outperforms these baselines with accuracies of 97.68% for two classes, 89.85% for three classes, and 84.76% for four classes. The system allows COVID-19 patients to be processed by an automated screening system with minimal human contact.
Keywords: COVID-19; Computer vision; Convolutional neural network (CNN); Human-machine system; Industry 4.0
Year: 2022 PMID: 35334315 PMCID: PMC8935962 DOI: 10.1016/j.compbiomed.2022.105418
Source DB: PubMed Journal: Comput Biol Med ISSN: 0010-4825 Impact factor: 6.698
Fig. 1. The architecture of the proposed framework.
Industry 4.0 applications amid the COVID-19 pandemic.
| Layer | Industry 4.0 Technology | COVID-19 Applications |
|---|---|---|
| Perception Layer | Advanced Manufacturing | Wearable sensors can be applied to monitor COVID-19 symptoms like temperature and blood oxygen level. |
| Perception Layer | Additive Manufacturing | 3D printing and 3D scanning can help in manufacturing critical parts and in robotic mapping. |
| Perception Layer | Virtual and Augmented Reality | Virtual devices help people to work together. Instructions can be provided in the real environment. |
| Transportation Layer | Internet and Cloud | IoT can be used in combination with drones for monitoring. Patients can be monitored remotely. |
| Transportation Layer | Cybersecurity | Companies can improve cybersecurity at each level to ensure security and support work from home. |
| Transportation Layer | Big Data Analysis | The captured data include real-time information as well as patient records. Big data can help in forecasting the effect of the virus on society and in providing real-time data to management and authorities for strategic planning in a crisis. |
| Application Layer | Advanced Solutions | Robots can perform repetitive tasks. Chatbots can be used to answer public questions. |
| Application Layer | Simulation | AI-enabled platforms allow users to simulate real situations for analysis. Virtual reality reduces cost and helps in communication and collaboration. |
Fig. 2. COVID-19 patients' X-rays.
Parameters used for ‘ImageDataGenerator’ to augment images.
| Parameter | Value |
|---|---|
| rotation_range | 30 |
| zoom_range | 0.15 |
| width_shift_range | 0.2 |
| height_shift_range | 0.2 |
| shear_range | 0.15 |
| horizontal_flip | True |
| fill_mode | nearest |
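The table above maps directly onto Keras's `ImageDataGenerator` constructor. A minimal sketch follows; the input image size and batch size are illustrative assumptions, since the source does not state them here.

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation settings taken directly from the table above.
augmenter = ImageDataGenerator(
    rotation_range=30,       # random rotations up to +/-30 degrees
    zoom_range=0.15,         # random zoom in [0.85, 1.15]
    width_shift_range=0.2,   # horizontal shift up to 20% of the width
    height_shift_range=0.2,  # vertical shift up to 20% of the height
    shear_range=0.15,        # shear intensity
    horizontal_flip=True,    # random left-right flips
    fill_mode="nearest",     # fill exposed pixels with nearest values
)

# Draw one augmented batch; random data stands in for X-ray images,
# and the 64 x 64 size is an assumption for illustration only.
images = np.random.rand(4, 64, 64, 3).astype("float32")
batch = next(augmenter.flow(images, batch_size=4, shuffle=False))
```

Repeatedly drawing batches like this is how a small set of X-rays can be expanded into the ten thousand augmented images mentioned in the abstract.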
Fig. 3. Image preprocessing steps followed in the experiments. (a) Image size is reduced, (b) a kernel is applied for edge detection, (c) the RGB image is converted to YUV to obtain the Y (luminance) channel, and (d) the YUV image is converted back to RGB.
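Steps (b)-(d) can be sketched in plain NumPy. The paper does not state which YUV variant or edge kernel it uses, so the BT.601 conversion matrix and the Laplacian kernel below are assumptions:

```python
import numpy as np

# BT.601 RGB -> YUV matrix (an assumption; the paper does not name
# the exact YUV variant used). Row 0 yields the luminance channel Y.
RGB2YUV = np.array([[ 0.299,    0.587,    0.114  ],
                    [-0.14713, -0.28886,  0.436  ],
                    [ 0.615,   -0.51499, -0.10001]])
YUV2RGB = np.linalg.inv(RGB2YUV)

def rgb_to_yuv(img):
    """Convert an H x W x 3 RGB image to YUV."""
    return img @ RGB2YUV.T

def yuv_to_rgb(img):
    """Invert the conversion back to RGB."""
    return img @ YUV2RGB.T

def edge_filter(gray, kernel=None):
    """Naive 'valid' 2-D convolution; defaults to a Laplacian edge kernel."""
    if kernel is None:
        kernel = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], float)
    kh, kw = kernel.shape
    h, w = gray.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(gray[i:i + kh, j:j + kw] * kernel)
    return out
```

Because `YUV2RGB` is the exact inverse of `RGB2YUV`, the round trip in steps (c) and (d) reconstructs the original RGB values up to floating-point error.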
Fig. 4. The architecture of the proposed CNN model for COVID-19 patient classification.
Details of the layer structure used in the proposed CNN model.
| Name | Description |
|---|---|
| Convolution | Filters = (3 × 3, …) |
| Convolution | Filters = (3 × 3, …) |
| Max pooling | Pool_size = (2 × 2), Strides = (2 × 2) |
| Convolution | Filters = (2 × 2, …) |
| Average pooling | Pool_size = (3 × 3), Strides = (1 × 1) |
| Flatten | Flatten() |
| Fully connected | Dense (120 neurons) |
| Fully connected | Dense (60 neurons) |
| Fully connected | Dense (10 neurons) |
| Output | Sigmoid (2-class) |
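The layer table translates to a short Keras `Sequential` sketch. The per-layer filter counts are truncated in the source, so the 32/32/64 values, the ReLU activations, the input size, and the single sigmoid output unit are all assumptions:

```python
from tensorflow.keras import layers, models

def build_covid_cnn(input_shape=(64, 64, 3)):
    """Sketch of the layer stack in the table above (2-class case).
    Filter counts and activations are assumptions, not from the source."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)),
        layers.Conv2D(64, (2, 2), activation="relu"),
        layers.AveragePooling2D(pool_size=(3, 3), strides=(1, 1)),
        layers.Flatten(),
        layers.Dense(120, activation="relu"),
        layers.Dense(60, activation="relu"),
        layers.Dense(10, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # 2-class output
    ])

model = build_covid_cnn()
```

For the three- and four-class experiments the final layer would instead use a softmax over 3 or 4 units.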
Result comparison of the proposed CNN with transfer learning models (2 classes).
| Model | Accuracy | Precision | Sensitivity | F-score | Specificity | AUC |
|---|---|---|---|---|---|---|
| VGG16 | 97.21% | 98.64% | 96.47% | 97.03% | 96.83% | 98.90% |
| AlexNet | 67.76% | 69.25% | 90.43% | 81.83% | 54.80% | 68.47% |
| InceptionV3 | 96.42% | 97.88% | 97.14% | 97.33% | 94.19% | 96.23% |
| Proposed CNN | 97.68% | 99.21% | 98.74% | 98.98% | 96.22% | 99.17% |
Result comparison of the proposed CNN with transfer learning models (3 classes).
| Model | Accuracy | Precision | Sensitivity | F-score | Specificity | AUC |
|---|---|---|---|---|---|---|
| VGG16 | 89.85% | 91.41% | 97.87% | 95.51% | 92.42% | 55.41% |
| AlexNet | 89.85% | 91.41% | 94.53% | 95.51% | 92.42% | 57.37% |
| InceptionV3 | 84.20% | 91.32% | 96.24% | 94.18% | 90.79% | 59.16% |
| Proposed CNN | 89.85% | 91.41% | 98.32% | 95.51% | 94.60% | 55.41% |
Result comparison of the proposed CNN with transfer learning models (4 classes).
| Model | Accuracy | Precision | Sensitivity | F-score | Specificity | AUC |
|---|---|---|---|---|---|---|
| VGG16 | 82.88% | 89.12% | 97.42% | 92.40% | 90.11% | 57.28% |
| AlexNet | 83.71% | 89.12% | 96.45% | 92.40% | 91.77% | 57.18% |
| InceptionV3 | 84.44% | 89.22% | 98.99% | 93.10% | 92.29% | 58.28% |
| Proposed CNN | 84.76% | 89.29% | 98.99% | 93.89% | 92.19% | 59.48% |
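All of the tabulated metrics except AUC derive from confusion-matrix counts. A small helper makes the definitions explicit; the counts passed in at the end are hypothetical, for illustration only:

```python
def binary_metrics(tp, fp, tn, fn):
    """Compute the tables' metrics from confusion-matrix counts.
    AUC is omitted: it requires per-sample scores, not counts."""
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)  # a.k.a. recall
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": precision,
        "sensitivity": sensitivity,
        "specificity": tn / (tn + fp),
        "f_score": 2 * precision * sensitivity / (precision + sensitivity),
    }

# Hypothetical counts, not taken from the paper.
m = binary_metrics(tp=90, fp=10, tn=80, fn=20)
```

This also explains patterns in the tables: a model can score high sensitivity (few false negatives) while its specificity lags, as AlexNet does in the 2-class comparison.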
Fig. 5. Result comparison of the proposed model with transfer learning models.
Summary of deep learning models from the literature used for COVID-19 classification on similar datasets.
| Selected Work | Model | Number of Images | Data source | Application |
|---|---|---|---|---|
| Ozturk et al. | DarkNet | 1125 (11.1% SARS-CoV-2, 44.4% pneumonia, 44.4% no finding) | – | Binary classification (SARS-CoV-2, no finding) and multiclass classification (SARS-CoV-2, no finding, pneumonia) |
| Abbas et al. | CNN | 1764 | – | Detection of SARS-CoV-2 infection |
| Das et al. | Xception | 1125 (11.1% SARS-CoV-2, 44.4% pneumonia, 44.4% no finding) | – | Automatic detection of COVID-19 infection |
| Wang et al. | DNN | 13800 (2.56% COVID-19, 58% pneumonia, 40% healthy) | – | Classification of the lung into three categories: no infection, SARS-CoV-2, viral/bacterial infection |
| Panwar et al. | nCOVnet | 337 (192 COVID-positive) | – | Detection of SARS-CoV-2 infection |
| Apostolopoulos et al. | VGG-19 | 224 COVID-19, 504 healthy, 400 bacterial and 314 viral | – | Automatic detection of COVID-19 disease |
| Marques et al. | EfficientNet | 404 normal, pneumonia and COVID-19 | – | Binary classification (COVID-19, normal patients) and multiclass (COVID-19, pneumonia, normal patients) |
Result comparison of the proposed CNN with deep learning models from the literature on the COVID-19 image dataset [55].
| Reference | Accuracy | Sensitivity | Specificity | Precision | F-Score | AUC |
|---|---|---|---|---|---|---|
| DarkNet | 98.08% | 95.13% | 95.30% | 98.03% | 96.51% | – |
| CNN | – | 97.91% | 91.87% | – | – | 93% |
| Xception | 97.40% | 97.09% | 97.29% | – | 96.96% | – |
| nCOVnet | 88.10% | 97.62% | 78.57% | 97.62% | 97.62% | 88% |
| VGG-19 | 96.78% | 98.66% | 96.46% | – | – | – |
| EfficientNet | 99.62% | 99.63% | 99.63% | 99.64% | 97.62% | 99.49% |
| Proposed Model | 99.52% | 99.34% | 98.74% | 98.98% | 98.22% | 99.57% |