
The ensemble deep learning model for novel COVID-19 on CT images.

Tao Zhou1,2, Huiling Lu3, Zaoli Yang4, Shi Qiu5, Bingqiang Huo1, Yali Dong1.   

Abstract

The rapid detection of the novel coronavirus disease, COVID-19, has a positive effect on preventing propagation and enhancing therapeutic outcomes. This article focuses on the rapid detection of COVID-19. We propose an ensemble deep learning model for novel COVID-19 detection from CT images. 2933 lung CT images from COVID-19 patients were obtained from previous publications, authoritative media reports, and public databases. The images were preprocessed to obtain 2500 high-quality images. 2500 lung tumor CT images and 2500 normal lung CT images were obtained from a hospital. Transfer learning was used to initialize model parameters and pretrain three deep convolutional neural network models: AlexNet, GoogleNet, and ResNet. These models were used for feature extraction on all images. Softmax was used as the classification algorithm of the fully connected layer. The ensemble classifier EDL-COVID was obtained via relative majority voting. Finally, the ensemble classifier was compared with the three component classifiers in terms of accuracy, sensitivity, specificity, F-score, and Matthews correlation coefficient. The results showed that the overall classification performance of the ensemble model was better than that of any component classifier, with higher values on all evaluation indexes. This algorithm can better meet the rapid detection requirements of the novel coronavirus disease COVID-19.
© 2020 Published by Elsevier B.V.


Keywords:  COVID-19; Deep learning; Ensemble learning; Lung CT images

Year:  2020        PMID: 33192206      PMCID: PMC7647900          DOI: 10.1016/j.asoc.2020.106885

Source DB:  PubMed          Journal:  Appl Soft Comput        ISSN: 1568-4946            Impact factor:   6.725


Introduction

A sudden outbreak of pneumonia of unknown cause started in December 2019, in Wuhan, China. On February 7, 2020, the Chinese National Health Commission tentatively named this virus-induced pneumonia "novel coronavirus pneumonia". Due to its high infection rate, the virus rapidly spread all over the world. On February 11, 2020, the World Health Organization (WHO) named the disease coronavirus disease 2019 (COVID-19) [1], and on March 11, 2020 it declared COVID-19 a pandemic. According to the Johns Hopkins University global COVID-19 real-time query system [2], as of April 18, 2020, 2,179,992 cases of COVID-19 had been diagnosed in countries other than China, with a total of 150,212 deaths. The disease is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The common symptoms of infected patients are fever, cough, sputum, shortness of breath, chest tightness, fatigue, and dizziness. Patients with severe pneumonia develop difficulty breathing or hypoxemia about a week after symptom onset. Rapid disease progression can lead to acute respiratory distress syndrome, septic shock, metabolic acidosis, and coagulopathy [3]. Since COVID-19 is highly contagious [4], [5], it is vital to detect the virus quickly and accurately to prevent propagation and provide timely treatment. Common detection methods for COVID-19 include nucleic acid reagent detection and CT examination. Clinical studies have shown high false-negative rates for suspected patients on the first attempt with nucleic acid reagent detection. Nucleic acid testing also has demanding environmental requirements and strict, time-consuming procedures, making it difficult to adopt on a large scale. Computed tomography (CT) detection of COVID-19 offers high sensitivity, a low misdiagnosis rate, and high commercial availability [6]. It can thus complement nucleic acid reagent detection [7].
The early CT image of the lungs in COVID-19 patients mainly presents ground-glass opaque shadows [8], and a "crazy paving pattern" is evident [9]. A few days later, the density of the lesions increases significantly, and halo and reversed-halo signs appear [10]. With the aggravation of the disease, bilateral lung lesions resemble "white lungs" [11]. In the later stage, the density of the lesion gradually decreases, and the area of the lesion shrinks. According to the characteristics of the lung CT images of COVID-19 patients, the disease can be divided into four stages: early, progressive, peak, and absorption [12]. Fig. 1 shows the main CT image features of COVID-19.
Fig. 1

Image features of CT image of COVID-19.

Table 1 shows the current clinical value of lung CT images on COVID-19.
Table 1

The clinical value of chest CT in COVID-19.

Author: Conclusion

Fan L [13]: Summarizes the CT imaging characteristics of COVID-19, clinical categories, CT imaging performance in COVID-19 children, and CT imaging differences between COVID-19 patients and patients with other pulmonary inflammation.
Iwasawa T [14]: Ultra-high-resolution CT (U-HR-CT) can identify the terminal bronchioles in normal lungs. U-HR-CT can be used to detect abnormal lung volume reduction, which is essential for the early diagnosis and timely treatment of critical illness in COVID-19 patients.
Song F [15]: The most common feature of COVID-19 on CT images is pure ground-glass opacity (GGO). If patients present GGO in the peripheral and posterior lungs on chest CT images, as well as cough and/or fever, a history of epidemic exposure, and normal or decreased white blood cells, then COVID-19 infection is highly suspected.
K. Wang [16]: Summarizes the location, distribution, morphology, and density of the lesions in CT images of 114 COVID-19 patients. SPO2 and lymphocytes can reflect lung inflammation. The diagnostic sensitivity and accuracy of spiral CT testing were higher than those of nucleic acid detection. This method can be applied to the early diagnosis and treatment of COVID-19 patients.
Huanhuan Liu [17]: Chest CT image features of pregnant women with COVID-19 pneumonia were atypical. The CT images showed that the lungs of pregnant patients were more susceptible to the disease. The CT image features of children were non-specific, so other diagnostic methods should be combined to diagnose children.
Agostini A [18]: Because of radiation exposure and motion artifacts in CT images, patients need to be imaged multiple times. The authors performed ultra-low-dose, dual-source, rapid CT imaging on 10 patients with confirmed COVID-19. This imaging method can provide a reliable diagnosis while reducing motion artifacts and dose.
At present, COVID-19 has spread all over the world. With the sharp increase in COVID-19 patients, existing medical resources and diagnostic capabilities are insufficient. In addition, staff density in hospitals in core epidemic areas has increased, along with the risk of cross-infection. The development of COVID-19 computer-aided diagnosis models based on lung CT images is thus increasingly important. According to the National Health Commission of the People's Republic of China (2020), "Diagnosis and treatment protocol of COVID-19", trial version 6 [19], CT detection is not only one of the diagnostic standards for COVID-19, but is also significant in its treatment. Ai, T. [20] found that chest CT had a higher sensitivity for diagnosing COVID-19 than RT-PCR. Chest CT may thus be considered a primary tool for COVID-19 detection in epidemic areas. Himoto Y. [21] pointed out that CT images can distinguish COVID-19 from other similar respiratory diseases. Lin Li [22] proposed a deep-learning-based COVNet model to distinguish COVID-19 from community-acquired pneumonia (CAP) in chest CT images: CT images were first segmented via U-net, and COVNet then distinguished between COVID-19 and CAP. Wang [23] proposed a fully automatic deep learning system for COVID-19 diagnosis and prognosis based on CT image analysis. They first built a DenseNet121-FPN for lung segmentation in chest CT images and then proposed a novel COVID-19Net for COVID-19 diagnosis and prognosis. This system can categorize patients into low- and high-risk groups according to disease severity, and can automatically identify the lesion area. COVID-19 computer-aided diagnosis combining deep learning and lung CT images plays an important role in quickly classifying and identifying COVID-19, improving diagnosis efficiency, saving doctors' energy, and optimizing medical resources.

Basic knowledge

Convolutional neural networks

Convolutional neural networks (CNNs) are multi-layer networks composed of stacked convolutional layers for feature extraction and down-sampling layers for feature processing. Fig. 2 shows the structure of a typical convolutional neural network. CNNs can automatically extract features from images, and have thus become a hot research topic. CNNs are perceptron-based models [24]. Their advantage is that they can directly receive original images, avoiding excessive image preprocessing. Through local receptive fields, weight sharing, and pooling, CNNs lower model complexity while making full use of the local and global information in an image, making them robust to translation, rotation, and scaling. Classic CNNs include the residual neural network (ResNet), AlexNet, and GoogleNet.
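The two core operations just described, convolution over a local receptive field with a shared kernel and down-sampling by pooling, can be sketched in plain Python. This is a minimal single-channel illustration, not the Matlab implementation used in the paper:

```python
def conv2d(image, kernel, stride=1):
    """Valid 2-D convolution: slide one shared kernel over the image,
    so every output node reuses the same weights (weight sharing)."""
    kh, kw = len(kernel), len(kernel[0])
    oh = (len(image) - kh) // stride + 1
    ow = (len(image[0]) - kw) // stride + 1
    return [[sum(image[i * stride + a][j * stride + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(ow)] for i in range(oh)]


def max_pool(fmap, size=2):
    """Non-overlapping max pooling: the down-sampling layer keeps only
    the strongest response in each size x size window."""
    h, w = len(fmap) // size, len(fmap[0]) // size
    return [[max(fmap[i * size + a][j * size + b]
                 for a in range(size) for b in range(size))
             for j in range(w)] for i in range(h)]
```

Because one kernel is reused at every spatial position, a layer needs only kh × kw weights per feature map regardless of image size, which is the complexity reduction described above.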
Fig. 2

Convolutional neural network structure diagram.


AlexNet

AlexNet [25] was designed by the 2012 ImageNet competition winners, Hinton and Alex Krizhevsky. AlexNet has five convolutional layers (interspersed with three pooling layers and two local response normalization layers) and three fully connected layers, with a total of about 60 million parameters. Fig. 3 shows the specific network parameters. Each input image is scaled to 256 × 256, square blocks of 224 × 224 are randomly cropped from it, and the three RGB channels are fed to the network. Due to GPU performance limitations at the time, Alex Krizhevsky et al. processed AlexNet on two GPUs in parallel, which is why the hidden layers in the figure are shown as two parallel computations. The first five layers of the AlexNet network are convolutional layers.
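The spatial size of each feature map follows the standard convolution arithmetic floor((W − K + 2P)/S) + 1. The small helper below is an illustrative sketch; it uses the commonly cited 227-pixel effective input (equivalently, a padded 224 input) for AlexNet's first layer:

```python
def conv_out(w, k, s=1, p=0):
    """Output width of a convolution or pooling layer:
    floor((w - k + 2p) / s) + 1."""
    return (w - k + 2 * p) // s + 1


# First convolutional layer: 11x11 kernels, stride 4, on the effective 227 input,
# giving the 55x55 feature maps described below.
first = conv_out(227, 11, s=4)
# The subsequent 3x3 max pooling with stride 2 reduces this to 27x27.
pooled = conv_out(first, 3, s=2)
```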
Fig. 3

AlexNet network structure diagram.

Taking the first layer as an example, 96 feature maps of 55 × 55 nodes are generated. Each feature map is produced by a convolution kernel with a size of 11 × 11 and a stride of four. After convolution filtering, the output excitation of the convolutional layer is obtained through a ReLU activation function and then passed to the next convolutional layer after local response normalization and maximum pooling down-sampling. A three-layer fully connected network is added as a classifier on top of the five-layer convolutional network; the high-dimensional convolutional features are classified to obtain a class label. The fully connected network finally outputs a response with a dimension of 1000, corresponding to the 1000 categories of images to be classified.

GoogleNet

GoogleNet [26] consists of a cascade of multiple basic inception modules and has a depth of 22 layers. Fig. 4 shows the structure of GoogleNet. Features are extracted at different scales from the previous layer with convolution kernels of three different sizes, and the resulting information is combined and passed to the next layer. An inception module contains 1 × 1, 3 × 3, and 5 × 5 convolution kernels. The 1 × 1 kernels output fewer channels than the previous layer and are mainly used for data dimensionality reduction; they are placed before the 3 × 3 and 5 × 5 convolutional layers to reduce their convolution computation and avoid the large cost that would be caused by increasing the network scale. After the features of the four channels are combined, the next layer can extract more useful features at different scales.
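The computational saving from the 1 × 1 bottleneck can be made concrete with a back-of-the-envelope multiplication count. The 28 × 28 × 192 input and the 16- and 32-channel widths below are illustrative assumptions, not figures from the paper:

```python
def conv_mults(h, w, c_in, c_out, k):
    """Number of multiplications for a kxk convolution producing
    c_out channels over an h x w x c_in input (valid, stride 1)."""
    return h * w * c_out * k * k * c_in


# Hypothetical inception branch: 28x28x192 input, 32 output channels.
direct = conv_mults(28, 28, 192, 32, 5)
# Same branch with a 1x1 bottleneck down to 16 channels first:
reduced = conv_mults(28, 28, 192, 16, 1) + conv_mults(28, 28, 16, 32, 5)
```

For these assumed sizes the bottleneck cuts the multiplication count by roughly an order of magnitude, which is why the 1 × 1 reduction precedes the 3 × 3 and 5 × 5 branches.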
Fig. 4

GoogleNet network structure diagram.


Residual neural network

The residual neural network (ResNet) is composed of convolutional layers for feature extraction and pooling layers for feature processing. It addresses the degradation and vanishing-gradient problems of deep multi-layer networks: as a network deepens, the gradients of a plain convolutional neural network gradually vanish and the shallow-layer parameters can no longer be updated. The shortcut-connection structure ensures that backpropagation can still update the parameters, avoiding the vanishing gradients caused by backpropagation through many layers, and thus makes deep models easier to optimize. After the input image undergoes several convolution and pooling operations, classification is performed in the fully connected layer. The shortcut connection implements an identity mapping [27]. The identity mapping ensures that network performance will not degrade, so the network learns new features on top of the input features; since the identity mapping adds no extra parameters or computation to the network, it also accelerates training and improves the training result.
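The shortcut connection can be sketched abstractly: the block outputs F(x) + x, so when the learned branch F contributes nothing the block degenerates to an identity mapping. This is a toy vector-level illustration, not the actual ResNet layers:

```python
def residual_block(x, transform):
    """y = F(x) + x: the shortcut adds the input back onto the
    output of the learned branch F (here, any callable)."""
    fx = transform(x)
    return [a + b for a, b in zip(fx, x)]


# If the residual branch learns nothing (F(x) = 0), the block is a pure
# identity mapping, so stacking such blocks cannot degrade the network.
identity = residual_block([1.0, 2.0, 3.0], lambda v: [0.0] * len(v))
```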

Transfer learning

Transfer learning [28] is a machine learning method that uses existing knowledge to solve problems in different, albeit related, fields. It relaxes two basic assumptions of traditional machine learning, with the purpose of transferring existing knowledge to solve learning problems in a target field that has no, or only a small amount of, labeled sample data. Transfer learning exists widely in human activities: the more factors two different fields share, the easier transfer learning is; otherwise it can be more difficult, and negative transfer with a deleterious effect may even occur. The purpose of this technique is to solve the problem of insufficient training samples in the target domain by transferring knowledge acquired in related source domains. According to whether the samples in the source and target fields are labeled and whether the tasks are the same [29], transfer learning can be divided into inductive, transductive, and unsupervised transfer learning. According to the content transferred, methods can be divided into feature representation transfer, instance transfer, parameter transfer, and relational knowledge transfer. According to whether the feature spaces of the source and target domains are the same, it can be divided into homogeneous and heterogeneous transfer learning.
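Parameter transfer, the variant used later in this paper, amounts to copying pretrained weights wherever layer names and shapes match and freshly initializing the rest (typically a new classifier head). The sketch below is illustrative: the layer names and flat weight lists are hypothetical, and real frameworks store tensors rather than lists:

```python
def transfer_init(pretrained, target_shapes):
    """Initialize a target model from pretrained weights where layer
    names and sizes match; other layers start from scratch."""
    params = {}
    for name, size in target_shapes.items():
        if name in pretrained and len(pretrained[name]) == size:
            params[name] = list(pretrained[name])  # transferred from source
        else:
            params[name] = [0.0] * size  # randomly initialized in practice
    return params
```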

Ensemble learning

Ensemble learning [30] methods mainly include bagging, boosting, and stacking. Ensemble learning can significantly improve the generalization ability of a learning system. At present, the common methods for generating base classifiers fall into two categories. One applies different types of learning algorithms to the same data set [31]; the base classifiers obtained this way are usually different and are called heterogeneous classifiers. The other applies the same learning algorithm to different training sets; the base classifiers obtained this way are called homogeneous classifiers. The combination strategies of ensemble learning for classifiers mainly include averaging, voting, and learning methods, and the combination method is usually chosen according to the purpose of the ensemble. For example, if the purpose of the ensemble is regression estimation, the prediction results of the individual learners are usually averaged, possibly with weights. If the ensemble is used for classification, the individual classification results are voted on to obtain the final result. Voting methods are divided into absolute and relative majority voting. In the absolute majority voting method, a result becomes the final output of the ensemble only if more than half of the individual learners output the same classification result. In the relative majority voting method, the classification result output by the largest number of individual learners is the final output of the ensemble.
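The two voting rules can be stated precisely in a few lines; this is a sketch assuming each individual classifier outputs a single label per sample:

```python
from collections import Counter


def relative_majority_vote(predictions):
    """Relative majority: the label predicted most often wins,
    even without more than half of the votes."""
    return Counter(predictions).most_common(1)[0][0]


def absolute_majority_vote(predictions):
    """Absolute majority: a label wins only if more than half
    of the classifiers agree; otherwise there is no decision."""
    label, count = Counter(predictions).most_common(1)[0]
    return label if count > len(predictions) / 2 else None
```

With three component classifiers, as in this paper, any label predicted by at least two of them wins under both rules.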

Ensemble deep learning model for novel COVID-19 on CT images

Some newer methods are used for image classification, for example, reinforcement learning, GA-SVM [32], and Dense-MobileNet models [33]. We propose an ensemble deep learning model for novel COVID-19 detection on CT images. The overview of the model is as follows: (1) Data collection. 2933 lung CT images of COVID-19 patients were obtained from previous publications, authoritative media reports, and public databases [34]. The images were preprocessed to obtain 2500 high-quality CT images. 2500 lung tumor CT images and 2500 normal lung CT images were obtained from the General Hospital of Ningxia Medical University in China. Fig. 5 shows examples of lung CT images collected for this study.
Fig. 5

Lung CT images.

(2) Sample set partition. Sample_Lung refers to the lung CT image sample set; its sample size is 7500. According to the type of medical image (NormalLung, LungTumor, COVID), the lung medical image sample set Sample_Lung is divided into three sample subsets: Sample_NormalLung, Sample_LungTumor, and Sample_COVID, each with a sample size of 2500. (3) Resize: Sample_Lung = resize(Sample_Lung). (4) The training and test sample sets were constructed using 5-fold cross-validation on the three sample subsets Sample_NormalLung, Sample_LungTumor, and Sample_COVID. Using a partition algorithm, each subset was divided into 5 uniform parts of 500 samples each, yielding the 5-fold cross-validation sample sets. (5) Individual classifiers were generated by pretraining the networks via transfer learning: AlexNet, GoogleNet, and ResNet were pretrained, and the parameters of the pretrained networks were used as the initialization parameters of AlexNet, GoogleNet, and ResNet, respectively. (6) AlexNet_Softmax, GoogleNet_Softmax, and ResNet_Softmax were trained on the training sample set Sample_Lung_TrainingSet to obtain three individual classifiers. (7) The ensemble EDL-COVID classifier was built by integrating the three individual classifiers with the relative majority voting method. Fig. 6 shows the algorithm flow chart.
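Step (4), the per-class 5-fold partition into 500-sample parts, can be sketched as follows. Samples are represented as simple index lists here; the real pipeline partitions CT images:

```python
def five_fold_split(samples, k=5):
    """Split one class's samples into k equal parts; fold i uses
    part i for testing and the remaining parts for training
    (2500 samples per class -> 500 per part)."""
    part = len(samples) // k
    folds = []
    for i in range(k):
        test = samples[i * part:(i + 1) * part]
        train = samples[:i * part] + samples[(i + 1) * part:]
        folds.append((train, test))
    return folds
```

Applying this to each of the three 2500-image subsets gives 6000 training and 1500 test samples per fold, matching the experiment section below.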
Fig. 6

Algorithm flow chart of this model.


Experimental results and analysis

Experimental environment

Software environment: Windows 10 operating system, Matlab R2019a. Hardware environment: the hardware platform used for the simulation experiments was an Intel(R) Core(TM) i5-7200U CPU @ 2.50 GHz (2.70 GHz), 4.0 GB RAM, and a 500 GB hard disk.

Evaluation index

The performance of the models is measured by accuracy, sensitivity, specificity, F-score, and the Matthews correlation coefficient, defined as follows. Accuracy is the most common evaluation index; the higher the accuracy, the better the classifier performance:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

Sensitivity and specificity measure the classifier's ability to recognize positive and negative examples, respectively; the larger the value, the higher the recognition performance:

Sensitivity = TP / (TP + FN),  Specificity = TN / (TN + FP)

The F-score is the weighted harmonic average of the recall rate and the precision rate, and is used to balance precision and recall:

F = 2 × Precision × Recall / (Precision + Recall),  where Precision = TP / (TP + FP) and Recall = TP / (TP + FN)

MCC is a correlation coefficient between the actual and the predicted classification. It comprehensively considers true positives, true negatives, false positives, and false negatives, and is a more balanced indicator. Its value range is [−1, 1], and the closer the value is to 1, the more accurate the prediction:

MCC = (TP × TN − FP × FN) / sqrt((TP + FP)(TP + FN)(TN + FP)(TN + FN))

Here, true positive (TP) is the number of positive samples correctly predicted, true negative (TN) is the number of negative samples correctly predicted, false positive (FP) is the number of samples that are actually negative but predicted positive, and false negative (FN) is the number of samples that are actually positive but predicted negative.
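The five indexes can be computed directly from the confusion-matrix counts. The helper below is a per-class (one-vs-rest) sketch, not the paper's Matlab code:

```python
import math


def metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, specificity, F-score, and MCC
    from binary confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    sen = tp / (tp + fn)                    # sensitivity / recall
    spe = tn / (tn + fp)                    # specificity
    pre = tp / (tp + fp)                    # precision
    f = 2 * pre * sen / (pre + sen)         # F-score
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return acc, sen, spe, f, mcc
```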

Experimental data

2933 lung CT images of COVID-19 patients were obtained from previous publications, authoritative media reports, and public databases. Among them, 1752 cases were obtained from domestic and foreign journals on platforms such as Sciencedirect, Nature, Springer Link, and China CNKI. 1012 cases were obtained from authoritative media reports such as the New York Times, Daily Mail (United Kingdom), The Times (United Kingdom), CNN, The Verge (United States), Avvenire (Italy), LaNuovaFerrara (Italy), People's Daily, Toutiao News, and Dr. Lilac. 68 cases were obtained from the sirm.org public database, and 101 cases from a GitHub public database (see Table 2, Table 3, Table 4).
Table 2

COVID-19CT images from academic journals.

Total  Sciencedirect  Springer Link  cnki.net  Other
1752   745            135            634       238
Table 3

COVID-19CT images from authoritative media reports.

Total  Daily Mail  The Verge  LaNuovaFerrara  People's network  Toutiao news  Doctor Lilac  Video PPT  Other
1012   72          70         69              226               136           205           156        78
Table 4

COVID-19 CT images from public databases.

Total  sirm.org  GitHub
169    68        101
The lung CT images of COVID-19 patients obtained in this study were all from third-party platforms. Images from different platforms differed in size and format, and contained different degrees of noise, such as watermarks and mark annotations. The research directions of the sources also differed: some studies were statistical analyses of COVID-19 cases, some tracked and analyzed the same patient, and others analyzed patients of different ages and genders or compared image characteristics across clinical classifications. Hence, there are differences in data modalities, such as horizontal (axial) position versus coronal position. The CT images were therefore preprocessed, for example by deleting images with large noise and coronal-position images. All images were converted to a unified .JPG format and normalized to a size of 64 × 64. Finally, we obtained 2500 high-quality CT images of the novel COVID-19.

Algorithm simulation experiment and analysis

Five-fold cross-validation was used for training, and each experimental result was averaged to obtain the final result. The number of training samples per run was 2000 × 3 = 6000, and the number of test samples was 500 × 3 = 1500. Five experiments were used to calculate the average value. The experiments were carried out on CT image data sets of normal lungs, lung tumors, and COVID-19. Identification and classification were performed using AlexNet-Softmax, GoogleNet-Softmax, and ResNet-Softmax, respectively. Then, the ensemble deep learning model EDL-COVID was used for classification. Finally, accuracy, sensitivity, specificity, F-score, and the Matthews correlation coefficient were used for evaluation.

Experiment one: AlexNet-Softmax classifier experiment

In the first experiment, the deep learning model is AlexNet and the classification model is Softmax; the combination is named AlexNet_Softmax. This experiment mainly discusses the recognition accuracy, training time, and evaluation indexes of AlexNet_Softmax during training and recognition on the sample spaces of three different modalities (normal lung CT, lung tumor CT, COVID-19 CT). Classification accuracy and training time are shown in Table 5, and the classification evaluation indexes are shown in Table 6. In the first fold of the 5-fold cross experiment in Table 5, 473 samples are recognized correctly and 27 incorrectly on the normal lung CT data set; 487 are recognized correctly and 13 incorrectly on the lung tumor CT data set; and 495 are recognized correctly and 5 incorrectly on the COVID-19 CT data set. The average classification accuracy of AlexNet_Softmax over the five folds is 98.16%, and the average running time is 354.47 s. The variance of the classification accuracy is 0.41432 and the standard deviation is 0.7196; that is, across the five folds the classification accuracy changed very little, so the algorithm is not sensitive to the samples and has good stability and strong fault tolerance. The model also offers much faster detection, with higher accuracy, than nucleic acid reagent detection. In Table 6, the average values of the sensitivity (SEN), specificity (SPE), F-score, and Matthews correlation coefficient (MCC) are 98.16%, 99.36%, 97.3%, and 95.95%, respectively. The results show that the model can recognize both positives and negatives and captures well the correlation between the real and predicted classifications.
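As a sanity check, the reported dispersion statistics can be reproduced from the per-fold accuracies in Table 5; the quoted variance matches the population variance and the quoted standard deviation matches the sample standard deviation:

```python
from statistics import mean, pvariance, stdev

# Per-fold accuracies of AlexNet_Softmax from Table 5.
acc = [97.00, 98.47, 98.07, 98.93, 98.33]

avg = mean(acc)        # 98.16, the reported average accuracy
var = pvariance(acc)   # 0.41432, the reported variance (population variance)
std = stdev(acc)       # ~0.7196, the reported sample standard deviation
```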
Table 5

AlexNet_Softmax classification results.

Five_fold cross  Accuracy (%)  Normal lung (correct/mis-c)  Lung tumor (correct/mis-c)  COVID-19 (correct/mis-c)  Time (s)
Fold 1           97.00         473/27                       487/13                      495/5                     342.92
Fold 2           98.47         495/5                        484/16                      498/2                     383.89
Fold 3           98.07         488/12                       486/14                      497/3                     350.25
Fold 4           98.93         494/6                        493/7                       497/3                     347.70
Fold 5           98.33         498/2                        480/20                      497/3                     347.60
Average          98.16         2448/52                      2430/70                     2484/16                   354.47
Table 6

AlexNet_Softmax classification evaluation index.

Five_fold cross  SEN (%)  SPE (%)  F (%)   MCC (%)
Fold 1           97.33    98.4     96.09   94.13
Fold 2           98.47    99.6     97.74   96.62
Fold 3           97.67    99.0     96.59   94.88
Fold 4           98.93    99.0     98.12   97.17
Fold 5           99.07    99.4     97.55   96.32
Average          98.16    99.36    97.3    95.95

Experiment two: GoogleNet_Softmax classifier experiment

GoogleNet_Softmax is adopted in the second experiment. This experiment discusses the recognition accuracy, training time, and evaluation indexes of GoogleNet_Softmax when training on and recognizing the CT image data sets of normal lungs, lung tumors, and COVID-19. Classification accuracy and training time are shown in Table 7, and the classification evaluation indexes are shown in Table 8. In the first fold of the 5-fold cross experiment in Table 7, 471 samples are recognized correctly and 29 incorrectly on the normal lung CT data set; 497 are recognized correctly and 3 incorrectly on the lung tumor CT data set; and 492 are recognized correctly and 8 incorrectly on the COVID-19 CT data set. The average classification accuracy of GoogleNet_Softmax over the five folds is 98.25%, and the average running time is 928.74 s. The variance of the classification accuracy is 0.4267, the standard deviation is 0.7304, and the variance of the running time is 49.2489. Like AlexNet_Softmax, GoogleNet_Softmax is not sensitive to the samples and has good stability and strong fault tolerance. Table 8 shows the classification evaluation indexes: the average values of sensitivity (SEN), specificity (SPE), F-score, and Matthews correlation coefficient (MCC) are 98.25%, 99.2%, 97.43%, and 96.14%, respectively. The experimental results show that GoogleNet_Softmax can detect COVID-19 patients quickly and accurately under non-contact testing and treatment.
Table 7

GoogLeNet_Softmax classification result.

Five_fold cross  Accuracy (%)  Normal lung (correct/mis-c)  Lung tumor (correct/mis-c)  COVID-19 (correct/mis-c)  Time (s)
Fold 1           97.33         471/29                       497/3                       492/8                     934.31
Fold 2           98.47         492/8                        487/13                      498/2                     937.04
Fold 3           97.67         488/12                       482/18                      495/5                     930.6
Fold 4           98.73         499/1                        487/13                      495/5                     924.04
Fold 5           99.07         498/2                        488/12                      500/0                     917.75
Average          98.25         2448/52                      2441/59                     2480/20                   928.74
Table 8

GoogLeNet_Softmax classification evaluation index.

Five_fold cross  SEN (%)  SPE (%)  F (%)   MCC (%)
Fold 1           97.33    98.4     96.09   94.13
Fold 2           98.47    99.6     97.74   96.62
Fold 3           97.67    99.0     96.59   94.88
Fold 4           98.73    99.0     98.12   97.17
Fold 5           99.07    100      98.62   97.94
Average          98.25    99.2     97.43   96.14

Experiment three: ResNet_Softmax classifier experiment

The third experiment is the ResNet_Softmax classifier experiment. Classification accuracy and training time on the three sample spaces (normal lung CT, lung tumor CT, COVID-19 CT) are shown in Table 9, and the classification evaluation indexes are shown in Table 10. In the first fold of the 5-fold cross experiment in Table 9, 481 samples are recognized correctly and 19 incorrectly on the normal lung CT data set; 495 are recognized correctly and 5 incorrectly on the lung tumor CT data set; and 494 are recognized correctly and 6 incorrectly on the COVID-19 CT data set. The average classification accuracy of ResNet_Softmax over the five folds is 98.56%, and the average running time is 990.58 s. The variance of the classification accuracy is 0.1075, the standard deviation is 0.3667, and the variance of the running time is 383.5619. Compared with AlexNet and GoogleNet, the variance of the classification accuracy dropped by about 74%, which is substantial. That is, at a similar classification accuracy, the stability and fault tolerance of ResNet are better than those of AlexNet and GoogleNet. In Table 10, the sensitivity (SEN), specificity (SPE), F-score (F), and Matthews correlation coefficient (MCC) are 98.56%, 99.4%, 97.87%, and 96.81%, respectively. The results show that the ResNet_Softmax model has better generalization performance in recognizing positives and negatives.
Table 9

ResNet_Softmax classification result.

Five_fold cross  Accuracy (%)  Normal lung (correct/mis-c)  Lung tumor (correct/mis-c)  COVID-19 (correct/mis-c)  Time (s)
Fold 1           98.00         481/19                       495/5                       494/6                     998.46
Fold 2           98.53         490/10                       490/10                      498/2                     1023.85
Fold 3           98.53         495/5                        483/17                      500/0                     985.42
Fold 4           99.00         494/6                        492/8                       499/1                     978.73
Fold 5           98.73         495/5                        492/8                       494/6                     966.46
Average          98.56         2455/45                      2452/48                     2485/15                   990.58
Table 10

ResNet_Softmax classification evaluation index.

Five_fold cross  SEN (%)  SPE (%)  F (%)   MCC (%)
Fold 1           98.0     98.8     97.05   95.57
Fold 2           98.53    99.6     97.84   97.76
Fold 3           98.53    100      97.85   96.79
Fold 4           99.0     99.8     98.52   97.78
Fold 5           98.73    98.8     98.11   97.17
Average          98.56    99.4     97.87   96.81
In the three experiments, three different classification models were used: AlexNet_Softmax, GoogleNet_Softmax, and ResNet_Softmax. Comparing Tables 5, 7, and 9, the classification accuracy of ResNet_Softmax improved by 0.4% over AlexNet_Softmax. From the perspective of individual classifiers, ResNet_Softmax is the best classifier for detecting COVID-19 patients quickly, but it is also the slowest under non-contact testing: the detection time increases from 354.47 s to 990.58 s. Time and accuracy are in tension, and in practical applications both must be weighed together. Comparing Tables 6, 8, and 10, the sensitivity (SEN), specificity (SPE), F-score (F), and Matthews correlation coefficient (MCC) of ResNet_Softmax increased by 0.4%, 0.04%, 0.57%, and 0.86% over AlexNet_Softmax, respectively; both specificity and sensitivity to the negative classes improved. It can be seen that deeper networks extract richer image features and achieve higher classification accuracy, but training time increases significantly.

Experiment four: EDL_COVID classifier experiment

In this experiment, the individual classifiers are AlexNet, GoogleNet, and ResNet, and Softmax is used as the classification algorithm of the fully connected layer. The ensemble classifier EDL-COVID is obtained via relative majority voting. To illustrate the performance of EDL-COVID, sensitivity, specificity, F-score, and the Matthews correlation coefficient are used to evaluate the algorithm. The classification results and evaluation indexes are shown in Table 11 and Table 12, respectively.
Table 11

EDL_COVID classification result.

Five-fold cross | Accuracy (%) | Normal correct | Normal mis. | Lung tumor correct | Lung tumor mis. | COVID-19 correct | COVID-19 mis. | Time (s)
Fold1   | 98.53  | 486  | 14 | 496  | 4  | 496  | 4  | 2275.81
Fold2   | 99.07  | 496  | 4  | 492  | 8  | 498  | 2  | 2234.81
Fold3   | 98.93  | 497  | 3  | 489  | 11 | 498  | 2  | 2266.31
Fold4   | 99.27  | 500  | 0  | 490  | 10 | 499  | 1  | 2250.53
Fold5   | 99.47  | 500  | 0  | 493  | 7  | 499  | 1  | 2231.85
Average | 99.054 | 2479 | 21 | 2460 | 40 | 2490 | 10 | 2251.86
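As a sanity check, the per-fold accuracies in Table 11 can be recomputed from the correct/misclassified counts (500 test images per class, 1500 per fold):

```python
# Per-fold counts from Table 11: (correct, misclassified) pairs for
# the normal, lung tumor, and COVID-19 classes.
folds = {
    "Fold1": [(486, 14), (496, 4), (496, 4)],
    "Fold2": [(496, 4), (492, 8), (498, 2)],
    "Fold3": [(497, 3), (489, 11), (498, 2)],
    "Fold4": [(500, 0), (490, 10), (499, 1)],
    "Fold5": [(500, 0), (493, 7), (499, 1)],
}

def fold_accuracy(counts):
    """Overall accuracy (%) of one fold: total correct over total images."""
    correct = sum(c for c, _ in counts)
    total = sum(c + m for c, m in counts)
    return 100 * correct / total

for name, counts in folds.items():
    # Reproduces the Accuracy column: 98.53, 99.07, 98.93, 99.27, 99.47
    print(f"{name}: {fold_accuracy(counts):.2f}%")
```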
Table 12

EDL_COVID classification evaluation index.

Five-fold cross | SEN (%) | SPE (%) | F (%) | MCC (%)
Fold 1  | 98.53 | 99.2 | 97.83 | 96.74
Fold 2  | 99.07 | 99.6 | 98.61 | 97.92
Fold 3  | 98.93 | 99.6 | 98.42 | 97.63
Fold 4  | 99.27 | 99.8 | 98.91 | 98.37
Fold 5  | 99.47 | 99.8 | 99.2  | 98.81
Average | 99.05 | 99.6 | 98.59 | 97.89
Fig. 7 shows the average values of the five indexes, illustrating the differences among the algorithms on each index.
Fig. 7

Five evaluation indicators of different models.

Table 11 shows the accuracy of each of the five cross-validation folds and the final average. The average classification accuracy of EDL_COVID is 99.054%, and the average running time is 2251.86 s. The variance of the classification accuracy is 0.1019, the standard deviation is 0.357, and the variance of the running time is 295.0553. The accuracy variance of the EDL_COVID model is the same as that of ResNet_Softmax, and the running-time variance tends to be stable. Table 12 shows the classification evaluation indexes: the sensitivity (SEN), specificity (SPE), F-score (F), and Matthews correlation coefficient (MCC) are 99.05%, 99.6%, 98.59%, and 97.89%, respectively. Among the three individual classifiers, ResNet_Softmax has the highest accuracy, sensitivity, specificity, F-score, and MCC, but also the longest running time; AlexNet_Softmax is the least time-consuming, with an average time of 354.475 s, but has the lowest accuracy, sensitivity, specificity, F-score, and MCC. Compared with the single classifiers, the classification accuracy of the EDL_COVID model is higher than that of AlexNet_Softmax, GoogleNet_Softmax, and ResNet_Softmax by 0.89%, 0.80%, and 0.49%, respectively, while the training time increases by 1897.39 s, 1323.12 s, and 1261.28 s, respectively. The EDL_COVID model thus classifies better than any single classifier. Deep learning classifiers such as AlexNet_Softmax, GoogleNet_Softmax, and ResNet_Softmax detect faster than nucleic acid reagent detection while achieving higher detection accuracy, and ensemble learning such as EDL_COVID further improves the classification accuracy over the individual classifiers.
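The four evaluation indexes can be computed from binary confusion counts. A minimal sketch follows, assuming a one-vs-rest treatment of each class (the paper does not spell out how the multiclass metrics are averaged, so this reconstruction only shows the per-class computation):

```python
import numpy as np

def ovr_metrics(y_true, y_pred, positive):
    """One-vs-rest evaluation metrics for one class, computed from the
    binary confusion counts (TP, TN, FP, FN); values in percent."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))
    tn = np.sum((y_pred != positive) & (y_true != positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    sen = tp / (tp + fn)                    # sensitivity (recall)
    spe = tn / (tn + fp)                    # specificity
    ppv = tp / (tp + fp)                    # precision
    f   = 2 * ppv * sen / (ppv + sen)       # F-score
    mcc = (tp * tn - fp * fn) / np.sqrt(    # Matthews correlation coefficient
        float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return {k: round(100 * v, 2)
            for k, v in dict(SEN=sen, SPE=spe, F=f, MCC=mcc).items()}

# Toy example: 3 positives and 3 negatives, one positive missed.
print(ovr_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 0], positive=1))
```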

Summary

Because COVID-19 is highly contagious and its transmission routes are difficult to control effectively, it is vital to detect the virus quickly and accurately to prevent propagation and provide timely treatment. Computed tomography (CT) detection of COVID-19 possesses high sensitivity, a low misdiagnosis rate, and high commercial availability; hence, artificial intelligence, especially deep learning on CT images, is a promising approach for detecting COVID-19 patients. In this paper, we proposed an ensemble deep learning model (EDL_COVID) based on COVID-19 lung CT images to rapidly detect the novel coronavirus COVID-19. 2500 CT images of COVID-19 lungs were obtained from previous publications, news reports, public databases, and other channels, and 2500 CT images each of lung tumors and of normal lungs were obtained from three grade-A hospitals in Ningxia, China. Transfer learning was used to pretrain three deep convolutional neural network models, namely AlexNet, GoogleNet, and ResNet, and to obtain initialization parameters. Using softmax as the classification algorithm of the fully connected layer, three component classifiers, AlexNet_Softmax, GoogleNet_Softmax, and ResNet_Softmax, were constructed, and the ensemble classifier EDL_COVID was obtained by relative majority voting. Our results showed that the overall classification performance of the EDL_COVID model was better than that of any single classifier: with the fastest detection speed of 342.92 s and an accuracy of 97%, the ensemble accuracy can reach 99.05%. Evaluation indexes such as specificity and sensitivity were also high, outlining its potential use for the rapid detection of COVID-19.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
