Bejoy Abraham, Madhu S. Nair.
Abstract
Coronavirus disease 2019 (COVID-19) is a pandemic caused by a novel coronavirus and is spreading rapidly throughout the world. The gold standard for diagnosing COVID-19 is the reverse transcription-polymerase chain reaction (RT-PCR) test. However, facilities for RT-PCR testing are limited, which makes early diagnosis of the disease difficult. Easily available modalities such as X-ray can be used to detect specific symptoms associated with COVID-19. Pre-trained convolutional neural networks (CNNs) are widely used for the computer-aided detection of diseases from small datasets. This paper investigates the effectiveness of a multi-CNN, a combination of several pre-trained CNNs, for the automated detection of COVID-19 from X-ray images. The method combines features extracted from the multi-CNN with a correlation-based feature selection (CFS) technique and a Bayesnet classifier to predict COVID-19. The method was tested on two public datasets and achieved promising results on both. On the first dataset, consisting of 453 COVID-19 images and 497 non-COVID images, the method achieved an AUC of 0.963 and an accuracy of 91.16%. On the second dataset, consisting of 71 COVID-19 images and 7 non-COVID images, it achieved an AUC of 0.911 and an accuracy of 97.44%. The experiments performed in this study demonstrate the effectiveness of a pre-trained multi-CNN over a single CNN for the detection of COVID-19.
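The pipeline the abstract describes — deep features from several pre-trained CNNs fused by concatenation, then reduced with CFS and classified with Bayesnet — can be outlined roughly as follows. This is an illustrative sketch, not the authors' code: `cnn_features` is a hypothetical stand-in for a real backbone (the paper uses networks such as Squeezenet, Darknet-53 and MobilenetV2), and the feature dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def cnn_features(images, dim):
    # Placeholder for a pre-trained CNN's penultimate-layer activations;
    # a real pipeline would run a backbone such as Darknet-53 here.
    return rng.standard_normal((len(images), dim))

def multi_cnn_features(images, dims=(512, 1024, 1280)):
    # Fuse deep features from several backbones into one vector per image.
    return np.hstack([cnn_features(images, d) for d in dims])

images = list(range(10))  # stand-in for 10 X-ray images
X = multi_cnn_features(images)
print(X.shape)            # (10, 2816): one fused feature vector per image
```

In the paper the fused vector is then filtered by CFS before reaching the Bayesnet classifier; only the fusion step is shown here.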
Keywords: Bayesnet; CNN; COVID-19; Multi-CNN; X-ray
Year: 2020 PMID: 32895587 PMCID: PMC7467028 DOI: 10.1016/j.bbe.2020.08.005
Source DB: PubMed Journal: Biocybern Biomed Eng ISSN: 0208-5216 Impact factor: 4.314
Fig. 1 The four X-ray images in the first row correspond to COVID-19; the images in the second row correspond to non-COVID.
Fig. 2 Sample activations. Each image contains sixteen tiles corresponding to sample activations of the original image.
Fig. 3 Architecture of the proposed method.
Results achieved using various datasets.
| Dataset | Precision | Recall | F-measure | AUC | Accuracy (%) |
|---|---|---|---|---|---|
| Dataset-1 | 0.853 | 0.985 | 0.914 | 0.963 | 91.1579 |
| Dataset-2 | 0.986 | 0.986 | 0.986 | 0.911 | 97.4359 |
Fig. 4 Confusion matrices corresponding to Dataset-1 and Dataset-2. The bottom-right diagonal elements, indicated in yellow, represent accuracy; elements in the right-most columns represent recall, and elements in the bottom-most rows represent precision.
Fig. 5 ROC curves corresponding to results achieved in Dataset-1 and Dataset-2.
Parameter settings of CFS.
| Subset size evaluator | Kernel | Number of CV folds | Seed |
|---|---|---|---|
| SMO | Polynomial | 5 | 3 |
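CFS, the feature selection step configured above, scores a candidate subset S of k features by the merit heuristic k·r̄cf / √(k + k(k−1)·r̄ff), where r̄cf is the mean feature–class correlation and r̄ff the mean feature–feature correlation. A minimal sketch of that score (the SMO-based subset-size evaluation in the table is not reproduced, and the synthetic data are hypothetical):

```python
import numpy as np

def cfs_merit(X, y, subset):
    # Merit_S = k * mean|corr(f, y)| / sqrt(k + k*(k-1) * mean|corr(f_i, f_j)|)
    k = len(subset)
    r_cf = np.mean([abs(np.corrcoef(X[:, f], y)[0, 1]) for f in subset])
    if k == 1:
        return r_cf
    r_ff = np.mean([abs(np.corrcoef(X[:, i], X[:, j])[0, 1])
                    for a, i in enumerate(subset) for j in subset[a + 1:]])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

# Hypothetical data: one feature tracks the label, one is pure noise.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 100).astype(float)
X = np.column_stack([y + 0.1 * rng.standard_normal(100),  # informative
                     rng.standard_normal(100)])           # noise
print(cfs_merit(X, y, [0]) > cfs_merit(X, y, [1]))        # True
```

The redundancy term r̄ff is what distinguishes CFS from simple correlation ranking: a subset of mutually redundant features is penalized even if each feature correlates well with the class.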
Parameter settings of Bayesnet.
| Estimator | Alpha | Search algorithm |
|---|---|---|
| Simple estimator | 0.3 | K2 hill climbing algorithm |
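Weka's simple estimator fills the network's conditional probability tables from counts plus the smoothing constant alpha (here 0.3). The sketch below applies that estimator to a naive-Bayes structure — the simplest network a K2 search can return. It is an illustrative reimplementation under those assumptions, not Weka's code, and the toy data are hypothetical.

```python
import numpy as np

class SimpleBayesNet:
    """Naive-Bayes structure with additive (alpha) smoothing of the CPTs."""
    def __init__(self, alpha=0.3):
        self.alpha = alpha

    def fit(self, X, y):
        self.classes = np.unique(y)
        n_vals = X.max(axis=0) + 1  # features assumed coded 0..v-1
        self.prior, self.cpt = {}, {}  # cpt[(c, f)] -> P(value | class c)
        for c in self.classes:
            Xc = X[y == c]
            self.prior[c] = (len(Xc) + self.alpha) / (len(X) + self.alpha * len(self.classes))
            for f in range(X.shape[1]):
                counts = np.bincount(Xc[:, f], minlength=n_vals[f]) + self.alpha
                self.cpt[(c, f)] = counts / counts.sum()
        return self

    def predict(self, X):
        # Log-posterior up to a constant: log prior + sum of log CPT entries.
        scores = np.array([[np.log(self.prior[c]) +
                            sum(np.log(self.cpt[(c, f)][x[f]]) for f in range(len(x)))
                            for c in self.classes] for x in X])
        return self.classes[scores.argmax(axis=1)]

# Toy discretized features: feature 0 tracks the label, feature 1 is noise.
X = np.array([[0, 0], [0, 1], [0, 0], [1, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1, 1, 1])
model = SimpleBayesNet(alpha=0.3).fit(X, y)
print(model.predict(np.array([[0, 1], [1, 0]])))  # [0 1]
```

With a non-trivial K2-learned structure, each CPT would additionally be conditioned on the feature's parents in the network rather than on the class alone.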
Comparison of results achieved using various search algorithms in combination with the proposed feature selection technique. Results obtained using the best-performing algorithm are indicated in bold.
| Search algorithm | Precision | Recall | F-measure | AUC | Accuracy (%) |
|---|---|---|---|---|---|
| Best first | 0.817 | 0.998 | 0.899 | 0.912 | 89.2632 |
| Greedy stepwise | 0.817 | 0.998 | 0.899 | 0.913 | 89.2632 |
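Both search algorithms in the table walk the space of feature subsets guided by the CFS merit score; greedy stepwise forward selection, for example, keeps adding the single feature that most improves the score until no addition helps. A self-contained sketch pairing a CFS-style merit with that search (the synthetic data are hypothetical):

```python
import numpy as np

def merit(X, y, subset):
    # CFS-style merit: class correlation rewarded, redundancy penalized.
    k = len(subset)
    r_cf = np.mean([abs(np.corrcoef(X[:, f], y)[0, 1]) for f in subset])
    r_ff = 0.0 if k == 1 else np.mean(
        [abs(np.corrcoef(X[:, i], X[:, j])[0, 1])
         for a, i in enumerate(subset) for j in subset[a + 1:]])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

def greedy_stepwise(X, y):
    # Forward selection: add the best single feature each round,
    # stop when no candidate improves the current merit.
    selected, best = [], -np.inf
    while True:
        candidates = [f for f in range(X.shape[1]) if f not in selected]
        scored = [(merit(X, y, selected + [f]), f) for f in candidates]
        if not scored or max(scored)[0] <= best:
            return selected
        best, f = max(scored)
        selected.append(f)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200).astype(float)
X = np.column_stack([y + 0.2 * rng.standard_normal(200),  # informative
                     rng.standard_normal(200),            # noise
                     rng.standard_normal(200)])            # noise
print(greedy_stepwise(X, y))  # [0]: only the informative feature survives
```

Best-first search differs only in that it may back up to earlier promising subsets instead of committing to each greedy addition; on this dataset the two behave almost identically, consistent with the near-identical rows in the table above.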
Comparison of results achieved using various pre-trained networks in combination with the proposed classifier. The best results, achieved using a multi-CNN, are indicated in bold.
| Network | Precision | Recall | F-measure | AUC | Accuracy (%) |
|---|---|---|---|---|---|
| Darknet-53+MobilenetV2+Resnet-101+NasnetLarge+Xception | 0.839 | 0.969 | 0.9 | 0.959 | 89.6842 |
| Shufflenet+Darknet-53+Mobilenet+Resnet-101+NasnetLarge | 0.826 | 0.974 | 0.894 | 0.952 | 88.9474 |
| Resnet-101+NasnetLarge+Xception+VGG-19+Squeezenet | 0.833 | 0.98 | 0.901 | 0.952 | 89.6842 |
| Densenet-201+InceptionResnetV2+Shufflenet+Darknet-53+MobilenetV2 | 0.846 | 0.967 | 0.902 | 0.952 | 90 |
| Densenet+InceptionResnetV2+Shufflenet+Darknet-53 | 0.837 | 0.951 | 0.89 | 0.944 | 88.8421 |
| Squeezenet+Darknet-53+MobilenetV2+Xception | 0.842 | 0.985 | 0.907 | 0.962 | 90.4211 |
| Squeezenet+Darknet-53+Shufflenet+Xception | 0.837 | 0.987 | 0.906 | 0.955 | 90.2105 |
| InceptionResnetV2+Shufflenet+Darknet-53+MobilenetV2 | 0.846 | 0.969 | 0.903 | 0.951 | 90.1053 |
| Squeezenet+Darknet-53+Shufflenet | 0.815 | 0.971 | 0.886 | 0.943 | 88.1053 |
| Squeezenet+Darknet-53+MobilenetV2 | 0.826 | 0.974 | 0.894 | 0.948 | 88.9474 |
| Darknet+InceptionResnetV2+MobilenetV2 | 0.847 | 0.951 | 0.896 | 0.954 | 89.4737 |
| Densenet-201+Darknet-53 | 0.815 | 0.98 | 0.89 | 0.931 | 88.4211 |
| Darknet-53+InceptionResnetV2 | 0.84 | 0.929 | 0.883 | 0.94 | 88.2105 |
| Squeezenet+Shufflenet | 0.809 | 0.971 | 0.883 | 0.933 | 87.6842 |
| Densenet-201 | 0.809 | 0.947 | 0.873 | 0.924 | 86.8421 |
| Darknet-53 | 0.8 | 0.956 | 0.871 | 0.919 | 86.5263 |
| InceptionResnetV2 | 0.806 | 0.96 | 0.876 | 0.918 | 87.0526 |
| MobilenetV2 | 0.83 | 0.956 | 0.888 | 0.942 | 88.5263 |
| NasnetLarge | 0.819 | 0.936 | 0.873 | 0.906 | 87.0526 |
| Resnet-101 | 0.814 | 0.958 | 0.88 | 0.925 | 87.5789 |
| Shufflenet | 0.813 | 0.971 | 0.885 | 0.925 | 88 |
| Squeezenet | 0.803 | 0.934 | 0.863 | 0.908 | 85.8947 |
| VGG-19 | 0.816 | 0.949 | 0.878 | 0.914 | 87.3684 |
| Xception | 0.817 | 0.985 | 0.893 | 0.91 | 88.7368 |
Comparison of results achieved using various classifiers in combination with the proposed network. Results achieved using the proposed classifier are shown in bold.
| Classifier | Precision | Recall | F-measure | AUC | Accuracy (%) |
|---|---|---|---|---|---|
| NaiveBayes | 0.83 | 0.989 | 0.902 | 0.947 | 89.7895 |
| SVM | 0.822 | 0.989 | 0.898 | 0.897 | 89.2632 |
| Logistic Regression | 0.846 | 0.909 | 0.877 | 0.941 | 87.7895 |
| AdaBoostM1 | 0.81 | 0.998 | 0.894 | 0.898 | 88.7368 |
| Random Forest | 0.828 | 0.987 | 0.9 | 0.94 | 89.5789 |
| ADTree | 0.828 | 0.938 | 0.88 | 0.922 | 87.7895 |
| NBTree | 0.84 | 0.96 | 0.896 | 0.949 | 89.3684 |
Number of images and validation techniques used by the various state-of-the-art methods.
| Method | Number of images | Validation |
|---|---|---|
| | 231 COVID-19 vs. 500 no-findings | 10-fold CV |
| | 142 COVID-19 vs. 142 normal | 70% of data for training, 30% for testing |
| | 50 COVID-19 vs. 50 normal | 5-fold CV |
| | 250 COVID-19 vs. 250 non-COVID | 10-fold CV |
| | Training: 250 COVID-19 vs. 250 non-COVID; testing: 74 COVID-19 vs. 36 non-COVID | Separate test set |
| | 100 COVID-19 vs. 1431 non-COVID | 2-fold CV |
| | 25 COVID-19 vs. 25 non-COVID | 80% of data for training, 20% for testing |
Number of images and validation techniques used by the proposed method are indicated in bold.
Results reported by various state-of-the-art methods. Results achieved using the proposed method are indicated in bold.
| Method | Precision | Recall | F-measure | AUC | Accuracy (%) |
|---|---|---|---|---|---|
| | 0.916 | 0.96 | 0.938 | – | 91.24 |
| | – | 0.972 | – | – | – |
| | – | 0.96 | 0.98 | – | 98 |
| | 0.81 | 0.78 | – | 0.89 | – |
| | 0.89 | 0.80 | – | 0.81 | – |
| | – | 0.96 | – | 0.95 | – |
| | 0.83 | 1.00 | 0.91 | – | – |