
Prediction of COVID-19 with Computed Tomography Images using Hybrid Learning Techniques.

Varalakshmi Perumal, Vasumathi Narayanan, Sakthi Jaya Sundar Rajasekar.

Abstract

Reverse Transcription Polymerase Chain Reaction (RT-PCR), used for diagnosing COVID-19, has been found to give a low detection rate during the early stages of infection. Radiological analysis of CT images has given a higher prediction rate when compared to the RT-PCR technique. In this paper, hybrid learning models are used to classify COVID-19 CT images, Community-Acquired Pneumonia (CAP) CT images, and normal CT images with high specificity and sensitivity. The proposed system is compared with various machine learning classifiers and other deep learning classifiers for better data analysis, and its outcome is also compared with other recent studies on COVID-19 classification. The proposed model has been found to outperform these approaches, with an accuracy of 96.69%, sensitivity of 96%, and specificity of 98%.
Copyright © 2021 Varalakshmi Perumal et al.


Year:  2021        PMID: 33968281      PMCID: PMC8063851          DOI: 10.1155/2021/5522729

Source DB:  PubMed          Journal:  Dis Markers        ISSN: 0278-0240            Impact factor:   3.434


1. Introduction

The COVID-19 virus, believed to have originated from the Rhinolophus bat, was transmitted to human beings in December 2019. Wuhan city's Huanan Seafood Market was the nerve center of the COVID-19 outbreak, which spread rapidly all around the world [1] and was eventually declared a pandemic by the World Health Organization (WHO) in March 2020 [2]. COVID-19-infected individuals have experienced severe acute respiratory disorders, fever, continuous coughing, and other infections. The mortality rate of this pandemic reached its peak in a short span of time, and early detection of the COVID-19 virus is the best way to reduce mortality. The CT scan images of COVID-19-affected individuals show distinctive characteristics like patchy multifocal consolidation, ground-glass opacities, interlobular cavitation, lobular septum thickening, clear indications of fibrotic lesions, peribronchovascular involvement, pleural effusion, and thoracic lymphadenopathy. The evolution of consolidation and ground-glass opacities in a COVID-19-affected patient from symptom commencement to the next 31 days is delineated in Figure 1 [2-4]. RT-PCR is known to be the standard testing tool but has produced false negatives at the early stages of infection in recent studies [5, 6]. Studies have also postulated the importance of CT scan images for screening COVID-19 with better specificity and sensitivity [7].
Figure 1

CT scan images of a COVID-19 patient over time. (a) 7th day (as soon as symptoms show up): CT scan presents opacities formed in the left lower lobe and right upper lobe. (b, c) 9th day: CT scan depicts ground-glass opacities which are bilateral and multifocal. (d) 15th day: CT scan shows that the disease has evolved into a mixture of consolidations and opacities. (e) 19th day: CT scan shows partial disappearance of ground-glass opacities and consolidations following antiviral treatment. (f) 31st day: CT scan shows absence of pleural effusions, pulmonary cavitation, and lymphadenopathy [2–4].

The characteristics of COVID-19 are similar to those of other viral pneumonias [4]. Yet with the help of deep learning techniques, one can precisely distinguish between the types of viral pneumonia. The main differences between pneumonias caused by different viruses, including the Respiratory Syncytial Virus (RSV) and Human Metapneumovirus (HMPV), in terms of ground-glass opacity (GGO), consolidation, and pleural effusion are depicted in Table 1. "+++" denotes roughly 50% of the lung area being involved and "+" roughly 10%.
Table 1

CT findings for major types of viral pneumonia.

Infections | Transmission | GGO | Consolidation | Nodule | Pleural effusion
Adenovirus | Respiratory, oral-fecal | +++ | +++ | Centrilobular | C
Influenza | Droplet, airborne | ++ | ++ | - | UC
RSV | Aerosol, contact | + | + | Centrilobular +++ | C
SARS-CoV-2 | Airborne, contact | +++ | + | Rare | Rare
HMPV | Contact, droplet | + | + | Centrilobular +++ | UC
(C: common; UC: uncommon)
The large number of CT scan images opens up a research area for start-up companies, and the techniques proposed by researchers aid radiologists and physicians in fast and early prediction of the disease. RT-PCR, which is used for diagnosing COVID-19, has a few limitations: the test kits are not sufficiently available, testing is time-consuming, and the sensitivity of testing varies. Thus, using CT scan images for screening COVID-19 is important. CT scan images expose patchy ground-glass opacities, hazy white spots in the lungs, which are the primary sign of COVID-19. In a recent study [8] with 1,014 patients, a deep learning technique was able to predict 888/1,014 positive cases from CT scan images of suspected COVID-19 patients, while RT-PCR was only able to detect 601/1,014 positive cases. These results show that CT scan images were able to diagnose COVID-19 effectively, thus saving more lives. The mortality rates for different CoV viruses are discussed in Table 2. There is little knowledge of what the future of the outbreak will be, and COVID-19 has different manifestations, as discussed in a study [9]. Another study [10] found that CT scans had a high sensitivity in diagnosing COVID-19. Chest CT is considered an important tool for COVID-19 detection in endemic regions. As a result of the sensitivity and specificity of CT scans, a clinical detection threshold based upon ideal CT scan imaging manifestations is now utilized in China. So, CT scan images act as a better alternative to RT-PCR testing, and chest CT scan images can be utilized as a primary resource for detecting COVID-19 in endemic regions which lack access to testing kits.
Table 2

Severe Acute Respiratory Syndrome (SARS) versus Middle East Respiratory Syndrome (MERS) versus COVID-19.

CoV | Year | Origin | Mortality rate | Community attack rate | Incubation time
SARS | 2002 | China | 10% | 30%-40% | 4-14 days
MERS | 2012 | Saudi Arabia | 34% | 10%-60% | 7 days
COVID-19 | 2019 | China | 3.4% | 4%-13% | 6 days
This also takes less time, thereby saving the radiologist's time for carrying out further treatment. The following conclusions were drawn from the research carried out in the studies mentioned above:
- The sensitivity and specificity of chest CT scans for screening COVID-19 are high. Thus, in endemic regions, an automated system that detects COVID-19 precisely can be used.
- Chest CT scan images play a vital role in monitoring and evaluating COVID-19 patients with extreme and severe respiratory symptoms. Based on CT scans, the intensity of the lung infection and the time taken by the disease to evolve were assessed, and treatments were discussed accordingly.
- Patients infected with COVID-19 require multiple chest CT scans during treatment to track the progression of the disease. Analysing multiple CT images manually is time-consuming and cannot be completed with great precision. Thus, screening many images quickly is a priority, which is achieved through deep learning techniques.
- The prime abnormalities which develop after the onset of symptoms in COVID-19-affected patients are ground-glass opacities (GGO), consolidations, and nodules. These features are easily recognizable through deep learning techniques.
- Early detection of COVID-19 infection is critical for treatment mitigation and safety control. Compared with RT-PCR, testing with chest CT images is a more dependable, rapid, and practical methodology to screen and monitor COVID-19 patients, specifically in hotspot regions.
- Even when symptoms are not visible (asymptomatic cases), CT findings can reveal visible changes and series of abnormalities in COVID-19-affected lungs using the proposed model.
So far, medical and clinical studies on chest CT scan findings have been discussed. In Table 3, the deep learning techniques which were carried out using images are presented.
Table 3

Comparison of studies on COVID-19 classification.

Author | Image | Accuracy | Classification
Wang [11] | CT | 82.9% | Transfer learning
Zhao [12] | CT | 85% | DenseNet
Vruddhi Shah [13] | CT | 94.52% | VGG-19
He X [14] | CT | 94% | Self-trans model
Michael J. Horry [15] | CT | 84% | Fine-tuned VGG-19
Song Ying [16] | CT | 93% | Deep CNN
The accuracy of these works is shown along with the classification methods used. The best of the works delineated in Table 3 shows 94.52% accuracy for a model built on CT images. It is also seen that most models are built for X-ray images [17-26]. Studies have shown instances where a patient's chest X-ray showed no traces of lung nodules that were later identified using CT scans [13, 15]; CT images therefore play a major role in detecting COVID-19 infection. Hence, for the above reasons, a hybrid learning model is proposed which scans CT images and classifies them as COVID-19, CAP, or normal images using machine learning and deep learning techniques.

2. Materials and Methods

Figure 2 shows the overall progression of the proposed hybrid learning model. The CT scan input images are collected from various sources like Google Images, RSNA, and GitHub, so they differ in resolution, size, and many other features. All the CT scan input images are therefore preprocessed to standardize them and then given to the pretrained deep learning models for feature extraction. The extracted features are then given to machine learning classification models. The pretrained deep learning models used in the proposed work are VGG-16, Resnet50, InceptionV3, and AlexNet. The machine learning models used are Support Vector Machine (SVM), Random Forest, Decision Tree, Naive Bayes, and K-Nearest Neighbour (KNN).
Figure 2

System architecture that was proposed for screening COVID-19 chest CT scans using feature extraction and classification.

2.1. Image Processing

Figure 3 shows the progression of image processing.
Figure 3

(a) Original COVID-19 CT scan image. (b) Histogram-equalized COVID-19 CT scan image. (c) Wiener-filtered COVID-19 CT scan image.

Histogram equalization is applied to enhance the quality of the image without losing its important features. The histograms of the original and equalized images are shown in Figure 4. The Wiener filter is used to remove noise from the image while preserving the fine details and edges of the lungs. The filter size is chosen to be 4 × 4 in order to prevent the image from getting oversmoothed. The Wiener filter is based on estimates of the mean and variance over the local neighbourhood of each pixel, from which it constructs a pixel-wise linear filter (Eq (1)):

WF(i, j) = μ + ((σ² − v) / σ²) (O(i, j) − μ)

where WF(i, j) denotes the pixel at position (i, j) in the filtered image and O(i, j) the pixel at position (i, j) in the original image; μ and σ² are the mean and variance of the local adjacent pixels, respectively, and v is the noise variance. Images are then resized to focus on a specific area of interest in order to extract its features.
Figure 4

Histogram of original and equalized images.
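The preprocessing chain described above can be sketched as follows. This is a minimal illustration, not the authors' exact implementation: histogram equalization is written out in numpy, and the 4 × 4 adaptive Wiener filter is taken from `scipy.signal.wiener`; the synthetic image stands in for a real CT slice.

```python
import numpy as np
from scipy.signal import wiener

def hist_equalize(img):
    """Histogram-equalize an 8-bit grayscale image (plain numpy sketch)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize CDF to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)          # lookup table for new levels
    return lut[img]

def preprocess(img):
    """Equalize, then denoise with a 4x4 adaptive Wiener filter, as in the paper."""
    eq = hist_equalize(img)
    return wiener(eq.astype(float), mysize=(4, 4))

# Synthetic low-contrast "CT slice" for demonstration
rng = np.random.default_rng(0)
ct = rng.normal(100, 10, (64, 64)).clip(0, 255).astype(np.uint8)
out = preprocess(ct)
```

Equalization stretches the narrow intensity range of the synthetic slice over the full 0-255 scale, and the Wiener step then smooths noise adaptively based on local mean and variance.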

2.2. Feature Extraction

Feature extraction is achieved using pretrained CNN models such as VGG-16, Resnet50, InceptionV3, and AlexNet. CNN models are purpose-built for image classification. An image is viewed as an array of pixels whose size depends on the resolution of the image. These CNN models consist of a series of convolutional and pooling layers. The convolutional layers perform the feature extraction: the convolution operation is applied to a region of the image, sampling the values of the pixels in that region and converting them into a single value. This convolution operation is defined in Eq (2) and illustrated in Figure 5:

E(i, j) = Σₐ Σᵦ I(a, b) F(i − a, j − b)

where E(i, j) is the value of the pixel at (i, j) after the convolution operation, I(a, b) is the value of the pixel at (a, b) in the input matrix, F(i − a, j − b) is the value of the pixel at (i − a, j − b) in the filter (kernel) matrix, and the sums over a and b run across the K × K kernel support, K being the kernel size.
Figure 5

Convolutional and max-pooling layer intermediate image.

The output size of the convolution layer is given in Eq (3):

M = (I − F + 2P) / S + 1

where M is the size of the output matrix, I is the size of the input matrix, F is the size of the convolution filter, P is the padding, and S is the stride value of the convolution operation. The max-pooling layer performs dimensionality reduction: it downsamples the values without losing important information by taking the maximum-valued neuron in each region of the previous layer's output, as given in Eq (4) and Figure 5:

P(i, j) = max over the pooling window of E(a, b)

where P(i, j) is the value of the pixel at (i, j) after the pooling operation, E(a, b) is the value of the pixel at (a, b) of the preceding layer's output, and M is the size of the previous layer's output grid. The output size of the max-pooling layer is given in Eq (5):

N = (M − F) / S + 1

where N is the size of the output matrix, M is the size of the previous layer's matrix, F is the size of the pooling filter, and S is the same stride value as chosen for the convolution operation. ReLU acts as the activation for the convolutional and max-pooling layers, as given in Eq (6):

ReLU(x) = max(0, x)

where x is the input value provided to activate the neuron. The parameters extracted from the series of convolution and pooling operations in all the pretrained models used for feature extraction are shown in Table 4. Note that the trainable parameters are comparatively few, and only these are used by the backpropagation algorithm to optimize and update the weights and biases; thus, only the important features are utilized for training the model. The features are nonredundant, and the informative values are intended to facilitate precise diagnosis of the classes.
Table 4

Model parameter comparison.

Model | Image size | Total parameters | Trainable parameters | Nontrainable parameters | Number of layers
VGG-16 | 224 × 224 | 14,882,883 | 166,403 | 14,716,480 | 16
InceptionV3 | 299 × 299 | 22,370,339 | 562,691 | 21,807,648 | 48
Resnet50 | 224 × 224 | 24,155,267 | 562,691 | 23,592,576 | 50
AlexNet | 227 × 227 | 62,378,344 | 229,123 | 62,149,221 | 8
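The output-size relations of Eqs (3) and (5) can be checked with a short sketch; the helper names below are illustrative, not from the paper.

```python
def conv_out(i, f, p=0, s=1):
    """Convolution output size: M = (I - F + 2P) / S + 1  (Eq (3))."""
    return (i - f + 2 * p) // s + 1

def pool_out(m, f, s):
    """Max-pooling output size: N = (M - F) / S + 1  (Eq (5))."""
    return (m - f) // s + 1

def relu(x):
    """ReLU activation: max(0, x)  (Eq (6))."""
    return max(0.0, x)

# A 224x224 input through a 3x3 convolution (padding 1, stride 1),
# then a 2x2 max-pool with stride 2 -- the usual VGG-16 pattern.
m = conv_out(224, 3, p=1, s=1)   # 224 (spatial size preserved)
n = pool_out(m, 2, 2)            # 112 (halved by pooling)
```

The same formula reproduces AlexNet's first layer: an 11 × 11 filter with stride 4 on a 227 × 227 input gives (227 − 11)/4 + 1 = 55.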

2.3. Classification

Classification refers to a predictive modelling problem where a class label is predicted for an input image. The classification is performed using traditional machine learning classifiers after removing the fully connected layers from the pretrained deep learning models. The extracted features were utilized for the final classification using Support Vector Machine (SVM), Decision Tree, Naive Bayes, K-Nearest Neighbour (KNN), and Random Forest. In SVM, the input values are plotted in an n-dimensional space, and the optimal hyperplane that differentiates the classes is found. In Random Forest, a large number of decision trees are built to operate as an ensemble model: all decision trees predict a class label, and the class that gets the most votes is chosen as the predicted label. In Decision Tree, each node acts as a splitting criterion and the branches lead to a final node (leaf node) that provides the output. Naive Bayes is a conditional probability model which uses Bayes' theorem for classification. KNN is a nonparametric classifier which classifies images based on their k nearest neighbours.
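The hybrid pattern (CNN feature vectors in, classical classifier out) can be sketched as follows. This is a minimal illustration with synthetic 256-dimensional feature vectors standing in for the pretrained-network activations, not the paper's actual pipeline; the class separation is artificial so the example is self-contained.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n_per_class, dim = 100, 256

# Synthetic "CNN features": each class clustered around a different mean.
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, dim))
               for c in range(3)])
y = np.repeat(["COVID-19", "CAP", "Normal"], n_per_class)

# The final classifiers operate on the extracted features, not raw pixels.
svm = SVC(kernel="rbf").fit(X, y)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A new feature vector near the third cluster is assigned its label.
pred = svm.predict(rng.normal(loc=2, scale=1.0, size=(1, dim)))
```

In the paper's setting, `X` would instead hold the activations produced by AlexNet, VGG-16, Resnet50, or InceptionV3 with their fully connected layers removed.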

3. Results and Discussion

In this section, datasets that have been utilized for carrying out the experiments are discussed. Further, the comparative analysis of results is discussed.

3.1. Data Formulation

The dataset used here contains CT scan images for COVID-19 (both symptomatic and asymptomatic), CAP, and normal chest CT scans. The images were assimilated from multiple sources to train the model precisely; the data collected from the different sources are shown in Table 5. The scanning schemes used are diverse, so the model is able to learn all possible image variations, and image preprocessing has been applied to standardize the dataset. A total of approximately 500 CT scan images were obtained for each class to maintain data balance. The images were split for training, validation, and testing purposes as shown in Table 6. The project was conducted on the Windows platform using Python (Jupyter Notebook). Packages such as pandas for data loading and access, numpy for array (matrix) creation, scikit-learn for machine learning classifiers, Keras with a TensorFlow backend for deep learning classifiers, and matplotlib for plotting graphs were used in the implementation. These tools completely satisfied the requirements and produced promising results.
Table 5

Multisource data assimilation for COVID-19, CAP, and normal CT scan images.

Category | Source | Images
COVID-19 | medRxiv, bioRxiv, NEJM, JAMA, Lancet, Medical Segmentation | 349
COVID-19 | Coronacases | 100
COVID-19 | Radiopaedia | 10
COVID-19 | Zenodo | 9
COVID-19 | Total | 488
CAP | Google Images, RSNA | 500
Normal | Google Images, GitHub | 500
Table 6

Data for training, testing, and validation.

Class | Training | Validation | Testing | Total
COVID-19 | 340 | 37 | 111 | 488
CAP | 340 | 49 | 111 | 500
Normal | 340 | 49 | 111 | 500
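The split sizes in Table 6 can be reproduced for one 500-image class with scikit-learn; this is an illustrative sketch (the paper does not state how the split was actually performed), using array indices as stand-ins for image files.

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(500)   # stand-in indices for one class of 500 images

# First carve off the 111 test images, then 49 validation images,
# leaving 340 for training -- the proportions of Table 6.
X_trainval, X_test = train_test_split(X, test_size=111, random_state=0)
X_train, X_val = train_test_split(X_trainval, test_size=49, random_state=0)
```

Fixing `random_state` makes the split reproducible across runs, which matters when comparing many classifier combinations on the same held-out images.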

3.2. Experimental Results

The COVID-19 images are correctly classified by the present model with high precision and recall. Initially, the 111 test images per class were tested with machine learning models such as Support Vector Machine (SVM), Decision Tree, Naive Bayes, K-Nearest Neighbour (KNN), and Random Forest. Secondly, the images were trained and tested with deep learning models such as CNN, AlexNet, VGG-16, InceptionV3, and Resnet50. On further analysis, the fully connected layers of the CNN models were removed, and prediction was performed with machine learning models as hybrid learning models. This showed that hybrid learning models such as AlexNet+SVM and AlexNet+Random Forest yielded better results when compared with the other models. Figure 6 shows the colormap images for COVID-19-affected CT scans which were correctly classified by AlexNet+SVM and AlexNet+Random Forest. Figure 7 shows correctly classified CAP images, and Figure 8 shows correctly classified normal CT scan images. The images in Figures 6–8 highlight the infected region in CT scans that are then classified as CAP or COVID-19; the normal CT scan image has no infected region highlighted. The COVID-19 image shows an infected region in the left lower lobe. This identification of the infected region is performed using the Jet colormap and Turbo heat map provided in Python.
Figure 6

Colormap for COVID-19-affected chest CT scan images correctly classified by AlexNet+SVM and AlexNet+Random Forest.

Figure 7

Colormap for CAP-affected chest CT scan images correctly classified by AlexNet+SVM and AlexNet+Random Forest.

Figure 8

Colormap for normal chest CT scan images correctly classified by AlexNet+SVM and AlexNet+Random Forest.
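The colormap overlays in Figures 6-9 can be produced as in the following sketch, assuming the "Jet Colormap provided in python" refers to matplotlib's `jet` colormap (the `turbo` colormap is applied the same way); the gradient image here stands in for a grayscale CT slice or activation map.

```python
import numpy as np
from matplotlib import cm

# Stand-in for a normalized grayscale CT slice (values in [0, 1]).
gray = np.linspace(0, 1, 64 * 64).reshape(64, 64)

# Applying a Colormap object maps each intensity to an RGBA colour,
# turning high-intensity (e.g. infected) regions red/yellow under 'jet'.
rgba = cm.jet(gray)                          # shape (64, 64, 4), floats in [0, 1]
rgb8 = (rgba[..., :3] * 255).astype(np.uint8)  # drop alpha, convert to 8-bit RGB
```

The resulting `rgb8` array can be displayed or blended with the original slice to highlight suspected regions.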

To compare this work with RT-PCR, 12 sample images from 3 patients were taken to test the model. All the images in Figure 9, which had been reported negative by RT-PCR, are classified correctly by AlexNet+SVM and AlexNet+Random Forest. The infected regions are also shown in the images using the colormap function provided by Python.
Figure 9

Images that were tested as negative by RT-PCR were actually positive cases and were correctly predicted as positive by the proposed work.

Various metrics used to analyse the different models are discussed below. Accuracy shows how correctly the images are classified. Precision determines the reproducibility of the predictions, i.e., how many of the predicted positives are correct; recall shows how many of the actual positives are discovered; and F1-score combines precision and recall into a balanced average. These values are calculated from a Confusion matrix built using the test data images, as follows:

F1-score = 2 × (Precision × Recall) / (Precision + Recall) (Eq (7))
Precision = T / (T + F), where T is the number of images observed as positive and predicted as positive, and F is the number of images observed as negative but predicted as positive (Eq (8))
Recall = T / (T + F), where T is the number of images observed as positive and predicted as positive, and F is the number of images observed as positive but predicted as negative; recall is also called sensitivity (Eq (9))
Specificity = T / (T + F), where T is the number of images observed as negative and predicted as negative, and F is the number of images observed as negative but predicted as positive (Eq (10))
Accuracy = (number of correctly predicted images) / (total number of images), where the numerator counts images predicted as their observed class (Eq (11))
RMSE = sqrt(Σ(y − y′)² / n), where y is the actual value, y′ is the predicted value, and n is the total number of images (Eq (12))
MAE = Σ|y − y′| / n, with y, y′, and n as above [26]

The Confusion matrix is used to analyse the performance of the classification models by comparing the predicted class labels for the test images against their known class labels. The classification report is used to evaluate the quality of class-label prediction by the classification models.
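The metrics above can be computed directly from a multiclass confusion matrix. The sketch below (illustrative function names, numpy only) uses the one-vs-rest convention described in the text, treating each class in turn as "positive".

```python
import numpy as np

def per_class_metrics(cm):
    """Per-class precision, recall (sensitivity), F1, and specificity
    from a confusion matrix cm[true, predicted] -- Eqs (7)-(10)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp            # predicted as the class, actually another
    fn = cm.sum(axis=1) - tp            # actually the class, predicted as another
    tn = cm.sum() - tp - fp - fn
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    specificity = tn / (tn + fp)
    return precision, recall, f1, specificity

def accuracy(cm):
    """Overall accuracy: correct predictions over all images (Eq (11))."""
    cm = np.asarray(cm, dtype=float)
    return np.trace(cm) / cm.sum()

def rmse(y, yp):
    """Root Mean Square Error (Eq (12))."""
    return float(np.sqrt(np.mean((np.asarray(y) - np.asarray(yp)) ** 2)))

def mae(y, yp):
    """Mean Absolute Error."""
    return float(np.mean(np.abs(np.asarray(y) - np.asarray(yp))))

# Illustrative 3-class matrix (rows: true COVID-19, CAP, Normal).
cm_example = [[100, 5, 6], [4, 102, 5], [2, 3, 106]]
p, r, f1, s = per_class_metrics(cm_example)
acc = accuracy(cm_example)
```

The example matrix is made up for illustration; feeding in the matrices of Tables 7-13 reproduces the corresponding classification reports.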
The Confusion matrix and classification report for the models built using conventional machine learning classifiers are presented in Table 7. Random Forest produced the best results among the machine learning classifiers, with a precision of 0.95, recall of 0.96, and specificity of 0.97. The Confusion matrix and classification report for the models constructed using deep learning techniques are analysed in Table 8; AlexNet produced the best prediction outcomes, with a precision of 0.94, recall of 0.94, and specificity of 0.97. The Confusion matrix and classification report for the proposed hybrid learning models are presented in Tables 9–13. The proposed models performed better than the other classifiers: AlexNet+SVM produced the best results, with a precision of 0.96, recall of 0.96, and specificity of 0.98 when tested on the 333 test images, and Resnet50+Random Forest also produced good outcomes, with precision, recall, and specificity of 0.95, 0.95, and 0.97, respectively. The feature extraction retains only the features necessary for training and removes those that are not vital for the classification task, which also makes training and testing the classification models faster.
Table 7

Confusion matrix and classification report for different machine learning classifiers.

True class | COVID-19 | CAP | Normal | Total | Precision | Recall | F1-score | Specificity

SVM:
COVID-19 | 104 | 7 | 0 | 111 | 0.93 | 0.92 | 0.92 | 0.96
CAP | 7 | 103 | 2 | 111 | 0.92 | 0.88 | 0.89 | 0.96
Normal | 1 | 6 | 104 | 111 | 0.93 | 0.98 | 0.95 | 0.96
Total/Average | 112 | 116 | 106 | 333 | 0.92 | 0.93 | 0.92 | 0.96

Random Forest:
COVID-19 | 106 | 5 | 0 | 111 | 0.95 | 0.97 | 0.95 | 0.97
CAP | 3 | 106 | 2 | 111 | 0.95 | 0.95 | 0.95 | 0.97
Normal | 0 | 6 | 105 | 111 | 0.95 | 0.98 | 0.95 | 0.97
Total/Average | 109 | 111 | 107 | 333 | 0.95 | 0.96 | 0.95 | 0.97

Decision Tree:
COVID-19 | 104 | 5 | 2 | 111 | 0.93 | 0.93 | 0.93 | 0.96
CAP | 3 | 103 | 5 | 111 | 0.92 | 0.91 | 0.91 | 0.96
Normal | 4 | 4 | 103 | 111 | 0.92 | 0.93 | 0.92 | 0.96
Total/Average | 111 | 112 | 110 | 333 | 0.92 | 0.92 | 0.92 | 0.96

Naive Bayes:
COVID-19 | 82 | 20 | 9 | 111 | 0.73 | 0.86 | 0.79 | 0.87
CAP | 8 | 83 | 20 | 111 | 0.74 | 0.62 | 0.67 | 0.86
Normal | 5 | 29 | 77 | 111 | 0.69 | 0.72 | 0.70 | 0.87
Total/Average | 95 | 132 | 106 | 333 | 0.72 | 0.73 | 0.72 | 0.87

KNN:
COVID-19 | 104 | 7 | 0 | 111 | 0.93 | 0.91 | 0.91 | 0.97
CAP | 6 | 103 | 2 | 111 | 0.92 | 0.88 | 0.89 | 0.97
Normal | 4 | 6 | 101 | 111 | 0.90 | 0.98 | 0.93 | 0.96
Total/Average | 114 | 117 | 103 | 333 | 0.92 | 0.92 | 0.92 | 0.97
Table 8

Confusion matrix and classification report for various deep learning classifiers.

True class | COVID-19 | CAP | Normal | Total | Precision | Recall | F1-score | Specificity

CNN:
COVID-19 | 100 | 5 | 6 | 111 | 0.90 | 0.90 | 0.90 | 0.95
CAP | 5 | 101 | 4 | 111 | 0.91 | 0.89 | 0.89 | 0.95
Normal | 5 | 7 | 99 | 111 | 0.89 | 0.90 | 0.89 | 0.95
Total/Average | 111 | 113 | 109 | 333 | 0.90 | 0.90 | 0.90 | 0.95

AlexNet:
COVID-19 | 105 | 3 | 3 | 111 | 0.94 | 0.94 | 0.94 | 0.97
CAP | 2 | 106 | 3 | 111 | 0.95 | 0.95 | 0.95 | 0.97
Normal | 4 | 2 | 104 | 111 | 0.93 | 0.93 | 0.93 | 0.97
Total/Average | 111 | 111 | 111 | 333 | 0.94 | 0.94 | 0.94 | 0.97

VGG-16:
COVID-19 | 104 | 4 | 3 | 111 | 0.93 | 0.93 | 0.93 | 0.96
CAP | 3 | 104 | 4 | 111 | 0.93 | 0.93 | 0.93 | 0.95
Normal | 4 | 3 | 104 | 111 | 0.93 | 0.93 | 0.93 | 0.96
Total/Average | 111 | 111 | 111 | 333 | 0.93 | 0.93 | 0.93 | 0.96

Resnet50:
COVID-19 | 102 | 6 | 3 | 111 | 0.92 | 0.93 | 0.92 | 0.96
CAP | 5 | 101 | 5 | 111 | 0.91 | 0.90 | 0.90 | 0.95
Normal | 3 | 6 | 102 | 111 | 0.92 | 0.93 | 0.92 | 0.96
Total/Average | 110 | 113 | 110 | 333 | 0.92 | 0.92 | 0.92 | 0.96

InceptionV3:
COVID-19 | 100 | 4 | 7 | 111 | 0.90 | 0.89 | 0.89 | 0.94
CAP | 6 | 100 | 5 | 111 | 0.90 | 0.91 | 0.90 | 0.95
Normal | 6 | 6 | 99 | 111 | 0.89 | 0.89 | 0.89 | 0.95
Total/Average | 112 | 110 | 111 | 333 | 0.90 | 0.90 | 0.90 | 0.95
Table 9

Confusion matrix and classification report for the proposed work, CNN for feature extraction, and various machine learning models for classification.

True class | COVID-19 | CAP | Normal | Total | Precision | Recall | F1-score | Specificity

CNN+SVM:
COVID-19 | 104 | 4 | 3 | 111 | 0.93 | 0.93 | 0.93 | 0.97
CAP | 3 | 104 | 4 | 111 | 0.93 | 0.93 | 0.93 | 0.97
Normal | 4 | 3 | 104 | 111 | 0.93 | 0.93 | 0.93 | 0.97
Total/Average | 111 | 113 | 109 | 333 | 0.93 | 0.93 | 0.93 | 0.97

CNN+Random Forest:
COVID-19 | 104 | 3 | 4 | 111 | 0.94 | 0.93 | 0.93 | 0.96
CAP | 3 | 103 | 5 | 111 | 0.93 | 0.93 | 0.93 | 0.96
Normal | 5 | 5 | 101 | 111 | 0.91 | 0.92 | 0.91 | 0.97
Total/Average | 112 | 111 | 110 | 333 | 0.93 | 0.93 | 0.93 | 0.96

CNN+Decision Tree:
COVID-19 | 101 | 4 | 6 | 111 | 0.91 | 0.92 | 0.92 | 0.95
CAP | 3 | 101 | 7 | 111 | 0.91 | 0.92 | 0.92 | 0.96
Normal | 5 | 5 | 101 | 111 | 0.91 | 0.89 | 0.90 | 0.96
Total/Average | 109 | 110 | 114 | 333 | 0.91 | 0.91 | 0.91 | 0.96

CNN+Naive Bayes:
COVID-19 | 104 | 3 | 4 | 111 | 0.94 | 0.93 | 0.93 | 0.91
CAP | 3 | 103 | 5 | 111 | 0.93 | 0.93 | 0.93 | 0.92
Normal | 5 | 5 | 101 | 111 | 0.91 | 0.92 | 0.91 | 0.91
Total/Average | 112 | 111 | 110 | 333 | 0.93 | 0.93 | 0.93 | 0.91

CNN+KNN:
COVID-19 | 102 | 5 | 4 | 111 | 0.92 | 0.91 | 0.91 | 0.95
CAP | 4 | 102 | 5 | 111 | 0.91 | 0.91 | 0.91 | 0.95
Normal | 5 | 4 | 102 | 111 | 0.91 | 0.91 | 0.91 | 0.96
Total/Average | 111 | 111 | 110 | 333 | 0.91 | 0.91 | 0.91 | 0.95
Table 10

Confusion matrix and Classification report for the proposed work, AlexNet for feature extraction, and various machine learning models for classification.

True class | COVID-19 | CAP | Normal | Total | Precision | Recall | F1-score | Specificity

AlexNet+SVM:
COVID-19 | 107 | 4 | 0 | 111 | 0.96 | 0.96 | 0.96 | 0.98
CAP | 1 | 108 | 2 | 111 | 0.95 | 0.97 | 0.96 | 0.97
Normal | 1 | 3 | 107 | 111 | 0.98 | 0.96 | 0.97 | 0.98
Total/Average | 109 | 113 | 109 | 333 | 0.96 | 0.96 | 0.96 | 0.98

AlexNet+Random Forest:
COVID-19 | 102 | 5 | 4 | 111 | 0.92 | 0.92 | 0.91 | 0.96
CAP | 4 | 102 | 5 | 111 | 0.91 | 0.91 | 0.91 | 0.98
Normal | 5 | 4 | 102 | 111 | 0.91 | 0.91 | 0.91 | 0.98
Total/Average | 111 | 111 | 110 | 333 | 0.91 | 0.91 | 0.91 | 0.98

AlexNet+Decision Tree:
COVID-19 | 104 | 4 | 3 | 111 | 0.94 | 0.94 | 0.94 | 0.97
CAP | 3 | 103 | 4 | 111 | 0.93 | 0.93 | 0.93 | 0.97
Normal | 4 | 3 | 104 | 111 | 0.93 | 0.93 | 0.93 | 0.96
Total/Average | 111 | 111 | 111 | 333 | 0.94 | 0.94 | 0.94 | 0.97

AlexNet+Naive Bayes:
COVID-19 | 93 | 8 | 10 | 111 | 0.84 | 0.82 | 0.83 | 0.91
CAP | 10 | 93 | 8 | 111 | 0.84 | 0.83 | 0.83 | 0.90
Normal | 10 | 11 | 90 | 111 | 0.81 | 0.83 | 0.82 | 0.91
Total/Average | 113 | 112 | 108 | 333 | 0.83 | 0.83 | 0.83 | 0.91

AlexNet+KNN:
COVID-19 | 104 | 3 | 4 | 111 | 0.94 | 0.94 | 0.94 | 0.94
CAP | 4 | 103 | 4 | 111 | 0.93 | 0.94 | 0.93 | 0.93
Normal | 3 | 4 | 104 | 111 | 0.94 | 0.93 | 0.93 | 0.94
Total/Average | 111 | 110 | 112 | 333 | 0.94 | 0.94 | 0.94 | 0.94
Table 11

Confusion matrix and classification report for proposed work, VGG-16 for feature extraction, and different machine learning models for classification.

True class | COVID-19 | CAP | Normal | Total | Precision | Recall | F1-score | Specificity

VGG-16+SVM:
COVID-19 | 105 | 2 | 4 | 111 | 0.95 | 0.95 | 0.95 | 0.97
CAP | 3 | 105 | 3 | 111 | 0.94 | 0.95 | 0.94 | 0.97
Normal | 2 | 4 | 105 | 111 | 0.94 | 0.95 | 0.94 | 0.97
Total/Average | 110 | 111 | 112 | 333 | 0.94 | 0.94 | 0.94 | 0.97

VGG-16+Random Forest:
COVID-19 | 106 | 2 | 3 | 111 | 0.95 | 0.95 | 0.96 | 0.97
CAP | 3 | 106 | 2 | 111 | 0.95 | 0.95 | 0.95 | 0.97
Normal | 3 | 3 | 105 | 111 | 0.94 | 0.95 | 0.94 | 0.97
Total/Average | 112 | 111 | 110 | 333 | 0.95 | 0.95 | 0.95 | 0.97

VGG-16+Decision Tree:
COVID-19 | 104 | 4 | 3 | 111 | 0.94 | 0.94 | 0.94 | 0.96
CAP | 3 | 103 | 4 | 111 | 0.93 | 0.94 | 0.93 | 0.96
Normal | 3 | 4 | 104 | 111 | 0.94 | 0.93 | 0.93 | 0.96
Total/Average | 111 | 110 | 112 | 333 | 0.94 | 0.94 | 0.93 | 0.96

VGG-16+Naive Bayes:
COVID-19 | 94 | 7 | 10 | 111 | 0.85 | 0.86 | 0.86 | 0.93
CAP | 7 | 94 | 10 | 111 | 0.85 | 0.85 | 0.85 | 0.92
Normal | 8 | 9 | 94 | 111 | 0.85 | 0.82 | 0.84 | 0.93
Total/Average | 109 | 110 | 114 | 333 | 0.85 | 0.84 | 0.85 | 0.92

VGG-16+KNN:
COVID-19 | 103 | 4 | 4 | 111 | 0.93 | 0.94 | 0.93 | 0.96
CAP | 3 | 103 | 4 | 111 | 0.94 | 0.93 | 0.93 | 0.96
Normal | 4 | 4 | 103 | 111 | 0.93 | 0.94 | 0.93 | 0.96
Total/Average | 110 | 112 | 110 | 333 | 0.93 | 0.94 | 0.93 | 0.96
Table 12

Confusion matrix and classification report for proposed work, Resnet50 for feature extraction, and machine learning models for classification.

True class | COVID-19 | CAP | Normal | Total | Precision | Recall | F1-score | Specificity

Resnet50+SVM:
COVID-19 | 104 | 4 | 3 | 111 | 0.94 | 0.95 | 0.94 | 0.94
CAP | 3 | 104 | 4 | 111 | 0.94 | 0.93 | 0.93 | 0.93
Normal | 3 | 4 | 104 | 111 | 0.94 | 0.94 | 0.94 | 0.94
Total/Average | 110 | 112 | 111 | 333 | 0.94 | 0.94 | 0.94 | 0.94

Resnet50+Random Forest:
COVID-19 | 105 | 3 | 3 | 111 | 0.95 | 0.95 | 0.95 | 0.97
CAP | 3 | 104 | 4 | 111 | 0.94 | 0.95 | 0.95 | 0.97
Normal | 3 | 3 | 105 | 111 | 0.94 | 0.94 | 0.95 | 0.97
Total/Average | 111 | 110 | 111 | 333 | 0.95 | 0.95 | 0.95 | 0.97

Resnet50+Decision Tree:
COVID-19 | 102 | 4 | 5 | 111 | 0.92 | 0.93 | 0.93 | 0.96
CAP | 4 | 103 | 4 | 111 | 0.93 | 0.92 | 0.93 | 0.96
Normal | 4 | 5 | 102 | 111 | 0.92 | 0.92 | 0.92 | 0.96
Total/Average | 110 | 112 | 111 | 333 | 0.92 | 0.92 | 0.92 | 0.96

Resnet50+Naive Bayes:
COVID-19 | 97 | 7 | 7 | 111 | 0.87 | 0.87 | 0.87 | 0.94
CAP | 7 | 96 | 8 | 111 | 0.88 | 0.86 | 0.87 | 0.94
Normal | 8 | 6 | 97 | 111 | 0.86 | 0.86 | 0.86 | 0.93
Total/Average | 112 | 109 | 112 | 333 | 0.87 | 0.86 | 0.86 | 0.94

Resnet50+KNN:
COVID-19 | 102 | 5 | 4 | 111 | 0.92 | 0.92 | 0.92 | 0.95
CAP | 5 | 101 | 5 | 111 | 0.91 | 0.92 | 0.91 | 0.96
Normal | 4 | 4 | 103 | 111 | 0.93 | 0.92 | 0.93 | 0.96
Total/Average | 111 | 110 | 112 | 333 | 0.92 | 0.92 | 0.92 | 0.96
Table 13

Confusion matrix and classification report for proposed work, Inception V3 for feature extraction, and various machine learning models for classification.

True class | COVID-19 | CAP | Normal | Total | Precision | Recall | F1-score | Specificity

InceptionV3+SVM:
COVID-19 | 103 | 4 | 4 | 111 | 0.93 | 0.93 | 0.93 | 0.95
CAP | 4 | 102 | 5 | 111 | 0.92 | 0.93 | 0.92 | 0.96
Normal | 4 | 4 | 103 | 111 | 0.93 | 0.92 | 0.94 | 0.96
Total/Average | 111 | 110 | 112 | 333 | 0.93 | 0.93 | 0.93 | 0.96

InceptionV3+Random Forest:
COVID-19 | 102 | 5 | 4 | 111 | 0.92 | 0.92 | 0.92 | 0.96
CAP | 4 | 103 | 4 | 111 | 0.91 | 0.91 | 0.91 | 0.96
Normal | 5 | 5 | 101 | 111 | 0.91 | 0.92 | 0.91 | 0.95
Total/Average | 111 | 113 | 109 | 333 | 0.91 | 0.92 | 0.92 | 0.96

InceptionV3+Decision Tree:
COVID-19 | 101 | 5 | 5 | 111 | 0.91 | 0.91 | 0.91 | 0.95
CAP | 5 | 101 | 5 | 111 | 0.91 | 0.90 | 0.91 | 0.95
Normal | 5 | 6 | 100 | 111 | 0.90 | 0.90 | 0.90 | 0.95
Total/Average | 111 | 112 | 110 | 333 | 0.91 | 0.90 | 0.91 | 0.95

InceptionV3+Naive Bayes:
COVID-19 | 96 | 9 | 6 | 111 | 0.86 | 0.86 | 0.86 | 0.93
CAP | 6 | 96 | 9 | 111 | 0.86 | 0.85 | 0.86 | 0.93
Normal | 7 | 9 | 95 | 111 | 0.89 | 0.88 | 0.88 | 0.93
Total/Average | 108 | 112 | 107 | 333 | 0.87 | 0.87 | 0.87 | 0.93

InceptionV3+KNN:
COVID-19 | 101 | 5 | 5 | 111 | 0.91 | 0.92 | 0.91 | 0.95
CAP | 4 | 102 | 5 | 111 | 0.92 | 0.91 | 0.92 | 0.95
Normal | 5 | 5 | 101 | 111 | 0.96 | 0.91 | 0.91 | 0.95
Total/Average | 110 | 112 | 111 | 333 | 0.91 | 0.92 | 0.92 | 0.95
The outcomes of the models trained on images before and after preprocessing are also compared. There is a visible difference in the results, which shows the significance of preprocessing the images; this analysis is presented in Table 14. When comparing the outcomes with the studies featured in Table 3, the presented hybrid learning models have produced better results.
Table 14

Performance analysis with and without image preprocessing.

Model | With preprocessing (Accuracy / F1-score / MAE / RMSE / Specificity) | Without preprocessing (Accuracy / F1-score / MAE / RMSE / Specificity)
SVM | 93.39% / 0.92 / 0.229 / 0.054 / 0.96 | 91.11% / 0.91 / 0.298 / 0.089 / 0.94
Random Forest | 95.19% / 0.95 / 0.218 / 0.049 / 0.97 | 94.23% / 0.94 / 0.227 / 0.054 / 0.96
Decision Tree | 93.12% / 0.92 / 0.262 / 0.063 / 0.96 | 92.02% / 0.92 / 0.265 / 0.077 / 0.94
Naive Bayes | 72.69% / 0.72 / 0.795 / 0.396 / 0.87 | 69.63% / 0.69 / 0.803 / 0.412 / 0.78
KNN | 92.49% / 0.92 / 0.226 / 0.051 / 0.97 | 90.15% / 0.90 / 0.321 / 0.093 / 0.92
CNN | 90.01% / 0.90 / 0.314 / 0.087 / 0.95 | 89.45% / 0.89 / 0.356 / 0.097 / 0.92
AlexNet | 94.59% / 0.94 / 0.206 / 0.061 / 0.97 | 93.78% / 0.93 / 0.226 / 0.049 / 0.94
VGG-16 | 93.69% / 0.93 / 0.227 / 0.051 / 0.96 | 91.82% / 0.92 / 0.225 / 0.051 / 0.93
Resnet50 | 91.59% / 0.91 / 0.272 / 0.077 / 0.96 | 91.18% / 0.91 / 0.299 / 0.065 / 0.93
InceptionV3 | 89.78% / 0.89 / 0.313 / 0.082 / 0.95 | 87.43% / 0.86 / 0.391 / 0.099 / 0.94
CNN+SVM | 91.12% / 0.91 / 0.281 / 0.082 / 0.97 | 88.73% / 0.89 / 0.366 / 0.096 / 0.93
CNN+Random Forest | 92.49% / 0.93 / 0.227 / 0.052 / 0.97 | 89.99% / 0.90 / 0.325 / 0.088 / 0.93
CNN+Decision Tree | 90.99% / 0.91 / 0.271 / 0.078 / 0.96 | 88.31% / 0.88 / 0.347 / 0.093 / 0.93
CNN+Naive Bayes | 82.85% / 0.83 / 0.456 / 0.123 / 0.91 | 79.56% / 0.80 / 0.478 / 0.178 / 0.87
CNN+KNN | 91.89% / 0.92 / 0.253 / 0.061 / 0.95 | 89.56% / 0.89 / 0.312 / 0.079 / 0.90
AlexNet+SVM | 96.69% / 0.97 / 0.217 / 0.043 / 0.98 | 95.12% / 0.95 / 0.217 / 0.047 / 0.93
AlexNet+Random Forest | 96.09% / 0.96 / 0.225 / 0.049 / 0.98 | 95.11% / 0.95 / 0.213 / 0.047 / 0.95
AlexNet+Decision Tree | 93.09% / 0.93 / 0.225 / 0.050 / 0.97 | 92.45% / 0.92 / 0.223 / 0.053 / 0.92
AlexNet+Naive Bayes | 83.13% / 0.83 / 0.421 / 0.099 / 0.91 | 80.55% / 0.81 / 0.492 / 0.153 / 0.86
AlexNet+KNN | 93.39% / 0.93 / 0.220 / 0.055 / 0.94 | 90.91% / 0.91 / 0.279 / 0.050 / 0.91
VGG-16+SVM | 94.59% / 0.95 / 0.205 / 0.061 / 0.97 | 93.69% / 0.93 / 0.221 / 0.045 / 0.93
VGG-16+Random Forest | 95.19% / 0.95 / 0.200 / 0.054 / 0.97 | 93.34% / 0.93 / 0.220 / 0.049 / 0.94
VGG-16+Decision Tree | 93.39% / 0.93 / 0.214 / 0.043 / 0.96 | 91.23% / 0.91 / 0.277 / 0.080 / 0.93
VGG-16+Naive Bayes | 84.68% / 0.85 / 0.419 / 0.083 / 0.92 | 82.87% / 0.83 / 0.455 / 0.122 / 0.88
VGG-16+KNN | 93.09% / 0.93 / 0.261 / 0.062 / 0.96 | 92.45% / 0.92 / 0.227 / 0.053 / 0.92
Resnet50+SVM | 93.69% / 0.94 / 0.227 / 0.050 / 0.97 | 91.78% / 0.93 / 0.220 / 0.048 / 0.94
Resnet50+Random Forest | 94.29% / 0.94 / 0.201 / 0.059 / 0.97 | 86.45% / 0.86 / 0.399 / 0.087 / 0.93
Resnet50+Decision Tree | 92.19% / 0.91 / 0.278 / 0.079 / 0.96 | 89.10% / 0.89 / 0.369 / 0.101 / 0.93
Resnet50+Naive Bayes | 87.08% / 0.87 / 0.389 / 0.100 / 0.94 | 85.18% / 0.85 / 0.402 / 0.077 / 0.90
Resnet50+KNN | 91.89% / 0.92 / 0.249 / 0.058 / 0.96 | 88.99% / 0.89 / 0.337 / 0.088 / 0.91
InceptionV3+SVM | 92.79% / 0.93 / 0.220 / 0.047 / 0.99 | 89.99% / 0.90 / 0.319 / 0.091 / 0.92
InceptionV3+Random Forest | 91.89% / 0.92 / 0.236 / 0.059 / 0.96 | 87.91% / 0.88 / 0.320 / 0.079 / 0.93
InceptionV3+Decision Tree | 90.69% / 0.91 / 0.266 / 0.070 / 0.95 | 88.45% / 0.88 / 0.340 / 0.091 / 0.91
InceptionV3+Naive Bayes | 86.18% / 0.86 / 0.411 / 0.078 / 0.93 | 84.72% / 0.85 / 0.416 / 0.081 / 0.91
InceptionV3+KNN | 91.29% / 0.91 / 0.288 / 0.075 / 0.95 | 90.11% / 0.90 / 0.311 / 0.090 / 0.93
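Table 14 reports MAE and RMSE alongside accuracy; for integer-encoded class labels these reduce to the mean absolute and root-mean-square difference between true and predicted label indices. A toy sketch (the labels below are illustrative only, not the study's data):

```python
import math

# Toy true/predicted labels encoded as 0=COVID-19, 1=CAP, 2=Normal.
# These values are made up for illustration; the paper's data is not reproduced.
y_true = [0, 0, 1, 1, 2, 2, 0, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0, 0, 1, 2, 2]

n = len(y_true)
mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
```

Lower MAE and RMSE indicate predictions that deviate less from the true labels, which is why the preprocessed models in Table 14 pair higher accuracy with lower error values.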
Thus, the COVID-19 classification model has been constructed in a robust way and enables quicker prediction of COVID-19. The AlexNet model takes 13 minutes 25 seconds for training and 6 minutes 38 seconds for testing; VGG-16 takes 20 minutes 43 seconds for training and 12 minutes 30 seconds for testing; InceptionV3 takes 34 minutes 12 seconds for training and 20 minutes 12 seconds for testing; Resnet50 takes 43 minutes for training and 21 minutes for testing. As the model gets deeper, training and testing take longer: runtime grows with the number of layers. RT-PCR, used as the standard reference, takes 1-2 days in India to confirm whether a patient is infected with COVID-19. Compared to RT-PCR, the present model gives quicker predictions and can aid radiologists in carrying out further treatment and procedures. Its accuracy is also quite promising compared with RT-PCR, and CT scan images that tested negative by RT-PCR were correctly predicted by these models.

Medical images are often unclear, with lesions and tissues captured in CT scans in ways that can impede the prediction task. To overcome these difficulties, various image preprocessing techniques were applied; the incorporated techniques have an impact on accuracy and results, providing higher-resolution, higher-quality images for carrying out the prediction. In conclusion, the models presented in this study produce better outcomes (accuracy) and quicker predictions even in the early stages of infection. In short, the proposed work can be used for this global public health emergency, which requires immediate attention.
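The hybrid pattern discussed throughout, a deep network acting as a fixed feature extractor that feeds a classical classifier (as in the best-performing AlexNet+SVM model), can be sketched with scikit-learn. As a runnable stand-in, PCA over the small digits dataset replaces the pretrained AlexNet features; the dataset and the PCA extractor are assumptions for illustration, not the paper's setup:

```python
# Hybrid-learning sketch: feature extractor + classical classifier,
# mirroring the AlexNet+SVM structure (PCA stands in for CNN features).
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

# Stage 1 extracts compact features; stage 2 classifies them with an SVM.
model = make_pipeline(PCA(n_components=32), SVC(kernel="rbf"))
model.fit(X_train, y_train)
acc = model.score(X_test, y_test)
```

In the paper's setting, stage 1 would be the convolutional layers of a pretrained network (AlexNet, VGG-16, Resnet50, or InceptionV3) applied to the preprocessed CT images, with the SVM, Random Forest, Decision Tree, Naive Bayes, or KNN classifier trained on the extracted feature vectors.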

4. Conclusion

Early detection of COVID-19 is vital for treating and isolating patients in order to avoid the spread of the virus. RT-PCR is regarded as the standard technique, but it has been reported that chest CT can serve as a rapid and reliable approach for screening for COVID-19. The proposed hybrid learning models detect COVID-19 from chest CT scan images with an accuracy of 96.69%, sensitivity of 96%, and specificity of 98% for the AlexNet+SVM model. Even though the patterns of abnormalities in CAP- and COVID-19-affected CT scans overlap, these models perform well, with greater accuracy, sensitivity, and specificity, using multisource data assimilation. Finally, reliable models are proposed to distinguish COVID-19 and CAP from CT scan images.
References (11 in total)

1.  Time Course of Lung Changes at Chest CT during Recovery from Coronavirus Disease 2019 (COVID-19).

Authors:  Feng Pan; Tianhe Ye; Peng Sun; Shan Gui; Bo Liang; Lingli Li; Dandan Zheng; Jiazheng Wang; Richard L Hesketh; Lian Yang; Chuansheng Zheng
Journal:  Radiology       Date:  2020-02-13       Impact factor: 11.105

2.  Sensitivity of Chest CT for COVID-19: Comparison to RT-PCR.

Authors:  Yicheng Fang; Huangqi Zhang; Jicheng Xie; Minjie Lin; Lingjun Ying; Peipei Pang; Wenbin Ji
Journal:  Radiology       Date:  2020-02-19       Impact factor: 11.105

3.  Correlation of Chest CT and RT-PCR Testing for Coronavirus Disease 2019 (COVID-19) in China: A Report of 1014 Cases.

Authors:  Tao Ai; Zhenlu Yang; Hongyan Hou; Chenao Zhan; Chong Chen; Wenzhi Lv; Qian Tao; Ziyong Sun; Liming Xia
Journal:  Radiology       Date:  2020-02-26       Impact factor: 11.105

4.  Radiological findings from 81 patients with COVID-19 pneumonia in Wuhan, China: a descriptive study.

Authors:  Heshui Shi; Xiaoyu Han; Nanchuan Jiang; Yukun Cao; Osamah Alwalid; Jin Gu; Yanqing Fan; Chuansheng Zheng
Journal:  Lancet Infect Dis       Date:  2020-02-24       Impact factor: 25.071

5.  Diagnosis of COVID-19 using CT scan images and deep learning techniques.

Authors:  Vruddhi Shah; Rinkal Keniya; Akanksha Shridharani; Manav Punjabi; Jainam Shah; Ninad Mehendale
Journal:  Emerg Radiol       Date:  2021-02-01

6.  Use of Chest CT in Combination with Negative RT-PCR Assay for the 2019 Novel Coronavirus but High Clinical Suspicion.

Authors:  Peikai Huang; Tianzhu Liu; Lesheng Huang; Hailong Liu; Ming Lei; Wangdong Xu; Xiaolu Hu; Jun Chen; Bo Liu
Journal:  Radiology       Date:  2020-02-12       Impact factor: 11.105

7.  A Novel Coronavirus from Patients with Pneumonia in China, 2019.

Authors:  Na Zhu; Dingyu Zhang; Wenling Wang; Xingwang Li; Bo Yang; Jingdong Song; Xiang Zhao; Baoying Huang; Weifeng Shi; Roujian Lu; Peihua Niu; Faxian Zhan; Xuejun Ma; Dayan Wang; Wenbo Xu; Guizhen Wu; George F Gao; Wenjie Tan
Journal:  N Engl J Med       Date:  2020-01-24       Impact factor: 91.245

8.  Evolution of Computed Tomography Manifestations in Five Patients Who Recovered from Coronavirus Disease 2019 (COVID-19) Pneumonia.

Authors:  Qiulian Sun; Xinjian Xu; Jicheng Xie; Jingjing Li; Xiangzhong Huang
Journal:  Korean J Radiol       Date:  2020-03-13       Impact factor: 3.500

9.  Covid-19: automatic detection from X-ray images utilizing transfer learning with convolutional neural networks.

Authors:  Ioannis D Apostolopoulos; Tzani A Mpesiana
Journal:  Phys Eng Sci Med       Date:  2020-04-03
