Literature DB >> 33935377

Medical image-based detection of COVID-19 using Deep Convolution Neural Networks.

Loveleen Gaur1, Ujwal Bhatia1, N Z Jhanjhi2, Ghulam Muhammad3,4, Mehedi Masud5.   

Abstract

The demand for automatic detection of Novel Coronavirus or COVID-19 is increasing across the globe. The exponential rise in cases burdens healthcare facilities, and a vast amount of multimedia healthcare data is being explored to find a solution. This study presents a practical solution for detecting COVID-19 from chest X-rays while distinguishing it from normal X-rays and those impacted by Viral Pneumonia via Deep Convolution Neural Networks (CNN). Three pre-trained CNN models (EfficientNetB0, VGG16, and InceptionV3) are evaluated through transfer learning. The rationale for selecting these specific models is their balance of accuracy and efficiency with fewer parameters, making them suitable for mobile applications. The dataset used for the study is publicly available and compiled from different sources. This study uses deep learning techniques and performance metrics (accuracy, recall, specificity, precision, and F1 scores). The results show that the proposed approach produced a high-quality model, with an overall accuracy of 92.93% and a COVID-19 sensitivity of 94.79%. The work indicates a definite possibility to implement computer vision design to enable effective detection and screening measures.
© The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2021.

Keywords:  COVID-19; Chest X-rays; Computer vision; Deep CNN; Deep learning; Transfer learning

Year:  2021        PMID: 33935377      PMCID: PMC8079233          DOI: 10.1007/s00530-021-00794-6

Source DB:  PubMed          Journal:  Multimed Syst        ISSN: 0942-4962            Impact factor:   2.603


Introduction

With the advent of the COVID-19 pandemic, a massive amount of multimedia healthcare data is generated. The analysis of this data is critical for a technology-driven solution. To process massive data for disease diagnosis, machine learning (ML) and deep learning (DL) techniques have exhibited noticeable performance. Many applications have been developed using ML and DL techniques over conventional computer-aided systems in disease diagnosis. Deep learning models are mainly used when there is a huge medical dataset, automatically extracting features from the images to develop prediction and detection models. DL methods greatly lessen the comprehensive data engineering and feature extraction process. In particular, deep learning techniques have shown significant potential to detect lung-based abnormalities by processing chest X-rays [1, 2]. Effective detection and screening measures, along with proper and speedy medical action, are the need of the hour. The Reverse Transcription Polymerase Chain Reaction (RT-PCR) test is a useful screening technique for COVID-19, but the method is complicated and time-consuming, with an accuracy of about 63% [3]. Thus, with complex manual testing procedures and a shortage of testing kits, the infected are interacting with the healthy world over, leading to an exponential rise in active cases [4]. The medical symptoms of severe COVID-19 infection are bronchopneumonia, causing fever, cough, dyspnoea, and pneumonia [4-7]. The similarity in visual aesthetics of chest X-rays of COVID-19 patients with those of Viral Pneumonia [8-12] can sometimes lead to misdiagnosis of the disease, and there have also been instances of misdiagnosis of chest X-rays by radiologists. This study is of significance as the transfer learning models, namely EfficientNetB0, InceptionV3, and VGG16, have been proven suitable for practical implementation as a means to detect COVID-19 due to their balance of accuracy and efficiency with fewer parameters suitable for mobile networked applications [56]. This study provides substantial evidence that computer vision technology can be a path to achieve better accuracy with lower human intervention when screening for COVID-19. The rest of the article is structured as follows. Section 2 presents the literature review; Sect. 3 describes the methodology, including datasets, databases, model selection, and pre-processing; Sect. 4 provides the performance evaluation and discussion; Sect. 5 concludes the paper with findings and further research.

Literature review

Recent developments in deep learning have been seen over the years in many fields such as big multimedia data, business analytics for medical multimedia research, and managing media-related healthcare data analytics [13–18, 54]. Computer-aided diagnosis (CAD) for lung diseases has been a part of medical research for nearly half a century. It began with simple rule-based algorithms for prediction but has since developed into ML via deep neural networks [10, 19–21, 53]. The extreme workload on radiologists has recently made CAD imperative in lung disorder analysis [22]. Convolution networks can now extract features from images that are hidden from the naked eye [23-26]. This deep learning technique is widely acknowledged and utilized for research [14, 27–29]. In medical image analysis, the application of CNNs was established by [30] to enhance low-light images; they used it to identify the nature of disease through CT and chest X-ray images. CNNs have also proven reliable for feature extraction and learning through image recognition from endoscopic videos. For chest X-ray analysis, CNNs have gathered interest because they are low in cost and training data for computer vision models is abundant. For classification, Rajkomar et al. applied GoogLeNet with data augmentation and pre-training on ImageNet to classify chest X-rays with 100 percent accuracy [31], essential evidence of deep learning applications in clinical image classification. Transfer learning through pre-trained models was implemented by Vikash et al. [32] in a study for pneumonia detection. Classification models for lung mapping and abnormality detection were built through a customized VGG16 transfer learning model [33]. Studies training CNN models on a large training set were performed by Wang et al. [34] and, with data augmentation, by Ronneberger et al. [35].
Accurate detection of 14 different diseases by feature extraction techniques through deep learning CNN models has been reported [36]. Sundaram et al. [37] achieved an AUC of 0.9 with transfer learning through AlexNet and GoogLeNet for lung disease detection. A ResNet50 model [38] delivered an outcome with 96.2% accuracy. The InceptionV3 model has been successfully used to classify bacterial and viral pneumonia-impacted chest X-rays (CXRs) from normal ones with an AUC of 0.940 [39]. In a different attempt, the disorder was screened and identified in chest X-rays with an area under the curve of 0.633 [40]. A gradient visualization technique was used to localize regions of interest with heatmaps for lung disease detection, and a 121-layer deep neural network achieved an area under the curve of 0.768 for pneumonia identification [41]. Philipsen et al. [42] experimented on the performance of T.B. detection based on computerized chest radiography and reported an AUC value of 0.93. Bharathi et al. [43] proposed a successful hybrid deep learning framework called “VDSNet” for time-efficient lung disease diagnosis through machine learning. Yoo et al. [44] proposed a deep learning-based decision tree for fast COVID-19 prediction; the study reported accuracies of 98%, 80%, and 95% for the three decision trees. A comparative analysis of the study is tabulated in Table 1.
Table 1

Comparative analysis of the study

References | Year | Technique | Findings | Results
Das et al. [49] | 2020 | Deep learning | Truncated Inception net model via transfer learning outperforms | Accuracy = 99.92%; AUC = 1
Joseph Paul Cohen et al. [50] | 2020 | Deep learning and regression | The reported model shows an ability to gauge the severity of COVID-19 lung infections from chest imaging | R² = 0.58 ± 0.09; MAE = 0.78 ± 0.05; MSE = 0.86 ± 0.11
Linda Wang et al. [55] | 2020 | Deep learning | COVID-Net shows the potential of real-world implementation | Accuracy = 92.6%; Sensitivity = 87.1%
Yujin Oh et al. [51] | 2020 | Deep learning | Patch-based deep neural network architecture is stable for a small dataset | Accuracy = 88.9%; Sensitivity = 85.9%
Sivaramakrishnan Rajaraman et al. [52] | 2018 | Deep learning | Customized VGG16 model demonstrates promising performance | Accuracy = 91.7%; Sensitivity = 90.5%

Methodology

The process of the medical image-based COVID-19 detection CNN classification model is shown in Fig. 1. A deep convolution neural network model’s classification capability depends on the amount and quality of data available for training. Models trained on sufficiently large datasets are observed to outperform those trained on smaller sets. Transfer learning with pre-trained weights is a method wherein a model previously trained on a more extensive training set is reused on a relatively small one, with modifications as required. This reduces training time, as the model need not be trained from scratch, and also reduces the load on the hardware being used, allowing training on general-purpose computers like the one used in this work. Transfer learning was achieved using the TensorFlow library. After loading each model, the learned weights were adapted to suit the present dataset.
Fig. 1

Process of computer vision-enabled classification

The details of the dataset used, details of model selection and process of model architecture are mentioned in the following sections.
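As an illustrative sketch (not the authors' released code), transfer learning in TensorFlow/Keras starts by loading a base network with pre-trained weights; the `pooling` choice below is an assumption.

```python
import tensorflow as tf

def build_base(weights="imagenet"):
    """Load EfficientNetB0 as a pre-trained feature extractor.

    include_top=False drops the original 1000-class ImageNet head so a
    new classifier for the 3-class X-ray problem can be attached.
    """
    base = tf.keras.applications.EfficientNetB0(
        include_top=False,
        weights=weights,            # "imagenet" reuses pre-trained weights
        input_shape=(224, 224, 3),
        pooling="avg",              # global average pooling over the last feature map
    )
    base.trainable = False          # keep the transferred weights fixed initially
    return base

# weights=None here only so the sketch runs without downloading weights
base = build_base(weights=None)
print(base.output_shape)  # (None, 1280)
```

Loading the base this way replaces training the convolutional layers from scratch; only the new head on top has to be learned on the X-ray dataset.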

Description of dataset

This study has used a posterior-to-anterior view of chest X-ray images. Figure 2 demonstrates some sample X-ray images of different classes. This view is most commonly referred to by radiologists in the detection of pneumonia.
Fig. 2

COVID-19, Normal and viral pneumonia X-ray images

Images have been sourced from two broad subsections, details of which are as follows.

COVID-19 radiography database

M.E.H. Chowdhury, et al. [45], in their research “Can AI help in screening Viral and COVID-19 pneumonia?” collected chest X-ray images of positive COVID-19 patients along with normal and those suffering from Viral Pneumonia which is available for public use in Kaggle.com.

Actualmed-COVID-chest X ray-dataset

Medical data were compiled by Actualmed with José Antonio Heredia Álvaro and Pau Agustí Ballester of Universitat Jaume I (UJI) for research [46]. The models were trained on 3106 images, 16% of which were used for validation. The three algorithms were tested on 806 non-augmented images of different categories to evaluate each algorithm’s performance. The details of the splitting of the dataset are illustrated in Table 2.
Table 2

Details of training and test set

Database | Type | No. of X-ray images | Training | Testing | Total
Actualmed-COVID19-CXR dataset | COVID-19 | 201 | 300 | 120 | 420
COVID-19 Radiography Database | COVID-19 | 219 | (combined with row above) | |
COVID-19 Radiography Database | Normal | 1341 | 1000 | 341 | 1341
COVID-19 Radiography Database | Viral Pneumonia | 1345 | 1000 | 345 | 1345

The COVID-19 training/testing split (300/120) combines the images from both sources (201 + 219 = 420).

Model selection and pre-processing

The models were selected for this research on account of their significance. EfficientNetB0 has been suggested on the premise that better accuracy and efficiency can be achieved by balancing network depth, width, and resolution. EfficientNetB0 surpasses conventional CNNs in accuracy while significantly reducing the number of parameters, as shown in Fig. 3 [47].
Fig. 3

EfficientNetB0 architecture

VGG16 [48], shown in Fig. 4, was developed in 2014 and is a popular model pre-trained for image classification.
Fig. 4

VGG16 model [48]

The InceptionV3 [49] network, developed in 2015, is shown in Fig. 5. Its main idea is to stack Inception modules that use comparatively few weights, so its computational cost is suitable for mobile applications and big data.
Fig. 5

Inception V3 model

Image pre-processing is done to resize the X-ray images to a standard input. As per the model requirements, the images are resized to 224 × 224 pixels and normalized according to the pre-trained model standards. The chest X-rays were augmented before training by rotation, scaling, and translation, including nearest-neighbour fill techniques, as shown in Fig. 6.
Fig. 6

Plot of augmented horizontal flip

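The augmentation operations described above can be sketched with Keras preprocessing layers (one possible implementation, since the paper does not publish its exact pipeline; the rotation, zoom, and shift ranges are illustrative assumptions):

```python
import tensorflow as tf

# Augmentation mirroring the operations described in the text: rotation,
# scaling (zoom), and translation (shift) with nearest-neighbour fill,
# plus a horizontal flip; pixels are first normalized to [0, 1].
augment = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.RandomRotation(0.04, fill_mode="nearest"),   # about ±15 degrees
    tf.keras.layers.RandomZoom(0.1, fill_mode="nearest"),
    tf.keras.layers.RandomTranslation(0.1, 0.1, fill_mode="nearest"),
    tf.keras.layers.RandomFlip("horizontal"),
])

# One dummy batch of four 224 x 224 RGB "X-rays"
x = tf.random.uniform((4, 224, 224, 3), maxval=255.0)
x_aug = augment(x, training=True)  # training=True enables the random transforms
print(x_aug.shape)  # (4, 224, 224, 3)
```

Because the random layers are only active when `training=True`, the same pipeline can be kept in the model graph and automatically disabled at evaluation time.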

Process of model architecture

The following steps were incorporated to implement the classification model. The architecture depicted in Fig. 7 was implemented by training some layers and keeping others frozen to fine-tune the model. In a CNN model, the layers at the bottom capture features that do not depend on the classification problem, whereas the layers at the top capture problem-dependent features. Steps 3, 4, and 5 are frozen, and the final layers are unfrozen after feature transfer. This unfrozen, fully connected layer is the network head and is responsible for classification. Weight decay was applied during backpropagation to reduce over-fitting in the models. The total number of epochs for training is 25, with a batch size of 18. The base learning rate is chosen to be 0.00001.
Fig. 7

Steps to implement the model

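The steps above might be sketched as follows in TensorFlow/Keras; the single-Dense head is an assumed design for illustration, while the 1e-5 base learning rate follows the text (training would then run for 25 epochs with a batch size of 18):

```python
import tensorflow as tf

def build_classifier(num_classes=3):
    # Pre-trained convolutional base serves as the frozen feature extractor
    # (weights=None here only to avoid a download; "imagenet" in practice)
    base = tf.keras.applications.EfficientNetB0(
        include_top=False, weights=None,
        input_shape=(224, 224, 3), pooling="avg",
    )
    base.trainable = False  # freeze the transferred bottom layers

    # Unfrozen fully connected layer: the "network head" responsible for
    # the 3-way classification (COVID-19 / Normal / Viral Pneumonia)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(base.output)
    model = tf.keras.Model(base.input, outputs)

    # Base learning rate of 0.00001, as stated in the text
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

model = build_classifier()
# model.fit(train_data, validation_data=val_data, epochs=25, batch_size=18)
```

With the base frozen, only the head's weights are updated, which is what keeps training feasible on a general-purpose computer.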

Results and discussion

In this section, we present the multi-class classification results followed by a brief discussion of the results given by each model. A confusion matrix is used to check how well a model performs on new data. Equations (1) to (5) give the formulae for the performance metrics used to evaluate the classification models. The results by VGG16 (Table 3) indicate that Normal CXRs were detected with reasonable sensitivity (89%) due to low false negatives; precision and specificity of 91.01% and 93%, with an accuracy of 91.8%, are reported. Viral Pneumonia (Table 3) is reported within acceptable values. The COVID-19 class (Table 3) is reported with good specificity (90%) but low precision (68%); the observed accuracy is 82.34% (Fig. 8a).
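Equations (1) to (5) reduce to simple counts from the confusion matrix. As a minimal sketch (assuming the usual one-vs-rest treatment of each class in a multi-class matrix), the per-class metrics can be computed as:

```python
import numpy as np

def class_metrics(cm, k):
    """Per-class metrics from a multi-class confusion matrix.

    cm[i, j] = number of samples of true class i predicted as class j;
    class k is treated as positive, all other classes as negative.
    """
    cm = np.asarray(cm, dtype=float)
    tp = cm[k, k]
    fn = cm[k, :].sum() - tp           # true class k predicted as something else
    fp = cm[:, k].sum() - tp           # other classes predicted as k
    tn = cm.sum() - tp - fn - fp
    accuracy    = (tp + tn) / cm.sum()
    sensitivity = tp / (tp + fn)       # recall
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)       # positive predictive value
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, precision, f1

# Toy 3-class example (rows: true COVID-19 / Normal / Viral Pneumonia)
cm = [[80, 10, 10],
      [5, 90, 5],
      [10, 10, 80]]
acc, sens, spec, prec, f1 = class_metrics(cm, 0)
print(round(sens, 3), round(spec, 3))  # 0.8 0.925
```

Running the same function for k = 0, 1, 2 yields the per-category rows reported in Tables 3, 4, and 5.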
Table 3

Evaluation for VGG16

Model | Category | Accuracy | Sensitivity (recall) | Specificity | Precision (PPV) | F1 score
VGG16 | COVID-19 | 0.8234 | 0.68 | 0.90 | 0.69 | 0.72
VGG16 | Normal | 0.9181 | 0.89 | 0.93 | 0.91 | 0.90
VGG16 | Viral pneumonia | 0.9169 | 0.90 | 0.92 | 0.91 | 0.90
Fig. 8

Confusion matrix of VGG16 (a), InceptionV3 (b), EfficientNetB0 (c)

The results by InceptionV3 (Table 4) indicate that Normal CXRs were detected with good sensitivity (93%), with better precision and specificity (95% and 94%) and an accuracy of 94.42%. Viral Pneumonia (Table 4) is reported with an accuracy of 94%. The COVID-19 class (Table 4) is reported with better specificity (95%) and acceptable precision (77%); the observed accuracy is 93.38% (Fig. 8b).
Table 4

Evaluation for Inceptionv3

Model | Category | Accuracy | Sensitivity (recall) | Specificity | Precision (PPV) | F1 score
InceptionV3 | COVID-19 | 0.9338 | 0.81 | 0.95 | 0.77 | 0.79
InceptionV3 | Normal | 0.9442 | 0.93 | 0.95 | 0.94 | 0.93
InceptionV3 | Viral pneumonia | 0.94 | 0.933 | 0.954 | 0.94 | 0.94
The results by EfficientNetB0 (Table 5) indicate that Normal CXRs were detected with very good sensitivity (94%). The highest precision and specificity (95% and 96.53%), with an accuracy of 95.53%, are reported. Viral Pneumonia (Table 5) is reported with an accuracy of 95%. The COVID-19 class (Table 5) is reported with high specificity (96%) and reasonable precision (79%); the observed accuracy is 94.79% (Fig. 8c).
Table 5

Evaluation for EfficientNetB0

Model | Category | Accuracy | Sensitivity (recall) | Specificity | Precision (PPV) | F1 score
EfficientNetB0 | COVID-19 | 0.9479 | 0.85 | 0.96 | 0.79 | 0.82
EfficientNetB0 | Normal | 0.9553 | 0.94 | 0.9653 | 0.95 | 0.95
EfficientNetB0 | Viral pneumonia | 0.95 | 0.941 | 0.965 | 0.95 | 0.95
Table 6 shows the overall performance parameters of the three classification models. The results are observed to be best for EfficientNetB0.
Table 6

Overall performance parameters

Model | Accuracy (%) | Sensitivity (recall) | Specificity | Precision (PPV) | F1 score
VGG16 | 87.84 | 0.8233 | 0.912 | 0.82 | 0.84
InceptionV3 | 91.32 | 0.89 | 0.94 | 0.8754 | 0.878
EfficientNetB0 | 92.93 | 0.90 | 0.95 | 0.883 | 0.88
It is observed that the main cause of misclassifying COVID-19 as Normal was lower opacity in the left and right upper lobes and the suprahilar region on posterior-to-anterior X-ray images, which closely resembles normal X-ray images.

Conclusion and future scope

The COVID-19 pandemic has clearly posed a threat to human existence. Efforts to curb the spread of the disease are observed to burden the healthcare sector. Testing measures to detect the presence of the virus are costly and may be insufficient to reach a wider population. Deep learning methods have proven to be an essential aid to screen big data with greater accuracy. This study aimed to provide evidence on the successful application of deep learning techniques to help detect the presence of COVID-19 infection. The results confirm that deep CNN computer vision models are capable of practical implementation in the healthcare sector to screen and detect the presence of COVID-19 from chest X-rays. Transfer learning techniques have proven beneficial in enhancing the learning capabilities of the models. The EfficientNetB0 model reported the highest accuracy, 94.79%, in detecting and classifying COVID-19 chest X-rays from other categories of chest abnormalities, and an overall accuracy of 92.93%. This paper provides evidence that the burden on medical facilities can be lowered through the effective use of AI technology. Implementation of this technique also reduces the risk of spreading the disease and the rise in cases, as doctors and patients will not require any physical contact at the screening level. The misclassified images were due to lower opacity in the left and right upper lobes and the suprahilar region on posterior-to-anterior X-ray images, which closely resembles normal X-ray images. The observations are from a limited dataset, which can be enhanced as more data become available for future research. The models can then be made country-specific to provide more detailed insights. The models were trained for 25 epochs, which can be increased on computer systems with enhanced processing capabilities. Further, different deep learning techniques and models may be implemented to compare results with respect to multimedia medical image screening. The models selected and implemented in this study can be a base for further research in this domain.
References (26 in total)

Review 1.  Computer-aided diagnosis in medical imaging: historical review, current status and future potential.

Authors:  Kunio Doi
Journal:  Comput Med Imaging Graph       Date:  2007-03-08       Impact factor: 4.790

2.  Low-light image enhancement of high-speed endoscopic videos using a convolutional neural network.

Authors:  Pablo Gómez; Marion Semmler; Anne Schützenberger; Christopher Bohr; Michael Döllinger
Journal:  Med Biol Eng Comput       Date:  2019-03-21       Impact factor: 2.602

3.  Deep Learning COVID-19 Features on CXR Using Limited Training Data Sets.

Authors:  Yujin Oh; Sangjoon Park; Jong Chul Ye
Journal:  IEEE Trans Med Imaging       Date:  2020-05-08       Impact factor: 10.048

4.  CT Imaging Features of 2019 Novel Coronavirus (2019-nCoV).

Authors:  Michael Chung; Adam Bernheim; Xueyan Mei; Ning Zhang; Mingqian Huang; Xianjun Zeng; Jiufa Cui; Wenjian Xu; Yang Yang; Zahi A Fayad; Adam Jacobi; Kunwei Li; Shaolin Li; Hong Shan
Journal:  Radiology       Date:  2020-02-04       Impact factor: 11.105

5.  Early Transmission Dynamics in Wuhan, China, of Novel Coronavirus-Infected Pneumonia.

Authors:  Qun Li; Xuhua Guan; Peng Wu; Xiaoye Wang; Lei Zhou; Yeqing Tong; Ruiqi Ren; Kathy S M Leung; Eric H Y Lau; Jessica Y Wong; Xuesen Xing; Nijuan Xiang; Yang Wu; Chao Li; Qi Chen; Dan Li; Tian Liu; Jing Zhao; Man Liu; Wenxiao Tu; Chuding Chen; Lianmei Jin; Rui Yang; Qi Wang; Suhua Zhou; Rui Wang; Hui Liu; Yinbo Luo; Yuan Liu; Ge Shao; Huan Li; Zhongfa Tao; Yang Yang; Zhiqiang Deng; Boxi Liu; Zhitao Ma; Yanping Zhang; Guoqing Shi; Tommy T Y Lam; Joseph T Wu; George F Gao; Benjamin J Cowling; Bo Yang; Gabriel M Leung; Zijian Feng
Journal:  N Engl J Med       Date:  2020-01-29       Impact factor: 176.079

6.  Automated chest X-ray reading for tuberculosis in the Philippines to improve case detection: a cohort study.

Authors:  R H H M Philipsen; C I Sánchez; J Melendez; W J Lew; B van Ginneken
Journal:  Int J Tuberc Lung Dis       Date:  2019-07-01       Impact factor: 2.373

7.  Predicting COVID-19 Pneumonia Severity on Chest X-ray With Deep Learning.

Authors:  Joseph Paul Cohen; Lan Dao; Karsten Roth; Paul Morrison; Yoshua Bengio; Almas F Abbasi; Beiyi Shen; Hoshmand Kochi Mahsa; Marzyeh Ghassemi; Haifang Li; Tim Duong
Journal:  Cureus       Date:  2020-07-28

8.  Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists.

Authors:  Pranav Rajpurkar; Jeremy Irvin; Robyn L Ball; Kaylie Zhu; Brandon Yang; Hershel Mehta; Tony Duan; Daisy Ding; Aarti Bagul; Curtis P Langlotz; Bhavik N Patel; Kristen W Yeom; Katie Shpanskaya; Francis G Blankenberg; Jayne Seekins; Timothy J Amrhein; David A Mong; Safwan S Halabi; Evan J Zucker; Andrew Y Ng; Matthew P Lungren
Journal:  PLoS Med       Date:  2018-11-20       Impact factor: 11.069

9.  COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images.

Authors:  Linda Wang; Zhong Qiu Lin; Alexander Wong
Journal:  Sci Rep       Date:  2020-11-11       Impact factor: 4.379

10.  Towards the sustainable development of smart cities through mass video surveillance: A response to the COVID-19 pandemic.

Authors:  Mohammad Shorfuzzaman; M Shamim Hossain; Mohammed F Alhamid
Journal:  Sustain Cities Soc       Date:  2020-11-05       Impact factor: 10.696

