
SEL-COVIDNET: An intelligent application for the diagnosis of COVID-19 from chest X-rays and CT-scans.

Ahmad Al Smadi1,2, Ahed Abugabah2, Ahmad Mohammad Al-Smadi3, Sultan Almotairi4,5.   

Abstract

COVID-19 detection from medical imaging is a difficult challenge that has piqued the interest of experts worldwide. Chest X-rays and computed tomography (CT) scans are the essential imaging modalities for diagnosing COVID-19. Researchers worldwide are focusing their efforts on developing viable detection methods and rapid treatment procedures for this pandemic. Fast and accurate automated detection approaches have been devised to alleviate the load on medical professionals. Deep Learning (DL) techniques have been applied successfully to recognize COVID-19 cases. This paper proposes a set of nine tuned deep learning models for diagnosing COVID-19, based on transfer learning and implemented in a novel architecture (SEL-COVIDNET). We include a global average pooling layer, flattening, and two fully connected dense layers. The model's effectiveness is evaluated using balanced and unbalanced COVID-19 radiography datasets. Our model's performance is then analyzed using six evaluation measures: accuracy, sensitivity, specificity, precision, F1-score, and Matthew's correlation coefficient (MCC). Experiments demonstrated that the proposed SEL-COVIDNET with tuned DenseNet121, InceptionResNetV2, and MobileNetV3Large models outperformed comparative SOTA results for multi-class classification (COVID-19 vs. No-finding vs. Pneumonia) in terms of accuracy (98.52%), specificity (98.5%), sensitivity (98.5%), precision (98.7%), F1-score (98.7%), and MCC (97.5%). For the COVID-19 vs. No-finding classification, our method had an accuracy of 99.77%, a specificity of 99.85%, a sensitivity of 99.85%, a precision of 99.55%, an F1-score of 99.7%, and an MCC of 99.4%. The proposed model offers an accurate approach for detecting COVID-19 patients, which aids in the containment of the COVID-19 pandemic.
© 2022 The Author(s).

Keywords:  COVID-19; CT-scans; Classification; Deep Learning; Pneumonia; Transfer learning; X-ray images

Year:  2022        PMID: 36033909      PMCID: PMC9398554          DOI: 10.1016/j.imu.2022.101059

Source DB:  PubMed          Journal:  Inform Med Unlocked        ISSN: 2352-9148


Introduction

The year 2020 was challenging for the entire planet and will be remembered as a year unlike any other. The world saw the emergence of a novel coronavirus (COVID-19). The pandemic's effect extends beyond the loss of countless lives: it has far-reaching consequences for mental health, a perpetual state of fear, economic misery, and social disturbance, among other things. Worldwide, 594 million individuals have been infected, with over 6.5 million fatalities and 566 million recoveries documented by August 08, 2022 [1]. Coronaviruses are a family of viruses that can cause illnesses such as the common cold, severe acute respiratory syndrome (SARS), and Middle East respiratory syndrome (MERS) [2]. Signs and symptoms of 2019-nCoV may appear 2 to 14 days after exposure; this period between exposure to the virus and the appearance of symptoms is called the incubation period [3]. An infected person can spread COVID-19 before developing symptoms. Common signs and symptoms include fever, cough, tiredness, and shortness of breath or difficulty breathing. Symptoms of COVID-19 range from very mild to severe, and some people have few symptoms at all [4]. COVID-19 therapy often does not eradicate the virus; instead, it helps alleviate symptoms. Approaches for detecting SARS-CoV-2 infection include real-time reverse transcription-polymerase chain reaction (RT-PCR), isothermal nucleic acid amplification, and microarrays [5]. Most nations' health authorities have adopted RT-PCR, which is widely considered the standard for molecular diagnosis of viral and bacterial illnesses [6]. Doctors commonly use an RT-PCR test on blood and sputum specimens to diagnose COVID-19; the test detects biologically derived components of the virus in the specimen. However, early in the infection, viral traces may be too scarce for the test to be reliably sensitive.
Numerous recent studies indicate that chest computed tomography (CT) outperforms laboratory tests in screening for COVID-19 [7], offering a more accurate, effective, and rapid diagnosis than RT-PCR. Doctors commonly utilize chest CT imaging to diagnose pneumonia [8]. CT scans provide comprehensive three-dimensional imaging of the lung; doctors analyze chest CT scans for fluid or pus in the lungs or any other signs consistent with COVID-19 infection. Chest CTs are typically quick and painless to perform [8]. Likewise, owing to its low cost and rapid image capture, chest radiography (X-ray) may be an even more appealing and accessible tool for identifying the onset of the illness. Since technology entered medical research, medical equipment and diagnostics have advanced to a new level. In many fields, including computer vision [9], healthcare systems [10], and most recently COVID-19 diagnosis [11], [12], deep learning methods have produced state-of-the-art results, owing to their capacity to autonomously extract representations from training data that are important to their predictions. State-of-the-art IoT devices have simplified difficult processes and aided real-time monitoring; technology advances at a breakneck pace [13]. Today, deep learning lets researchers produce diverse and broad feature sets that would be impossible for a professional to craft by hand [14]. Machine learning (ML) and deep learning (DL) have amassed a plethora of applications over the last several decades and have proven invaluable in resolving complex medical use cases [15]. For example, to tell a healthy lung from an infected lung, machine learning algorithms need an organized set of features to predict the outcome [16], [17].
On the other hand, with DL, the classifier distinguishes healthy lungs from infected lungs using network features generated automatically [18]. With the ability to generate features autonomously, DL has developed into a strong tool that eliminates the need for manual feature extraction. DL algorithms have advanced considerably over the decades and are now widely employed to identify problems in clinical imaging [19]. Deep learning algorithms can therefore tackle new problems by reusing previously acquired knowledge. When deep learning techniques are developed on small datasets, regions of interest in the images are often incorrectly identified, a problem seldom covered in the current literature. As a result, we examined our models' performance and picked only the highest performing ones based on their ability to correctly identify COVID-19 in X-ray images. Additionally, prior work often fails to illustrate how the suggested models behave when confronted with unbalanced datasets, which is frequently problematic. We diversify our study by considering small, unbalanced, and balanced datasets, and provide a full discussion of our findings. This article reviews current work on the subject, presents a deep learning-based screening technique for detecting patients with COVID-19 using chest X-ray and CT scan images, and shows how our model overcomes earlier research's limitations. The overwhelming majority of studies in the literature address the binary case (COVID-19 vs. No-finding); only a few articles address multiple classes. With expert-level automation, we anticipate that this technology will aid in the testing of COVID-19 patients. The following are the paper's significant contributions: This work developed a collection of deep learning models in a novel architecture (SEL-COVIDNET) to aid in the efficient early identification of patients with COVID-19.
Comprehensive experiments were conducted on chest X-ray and CT scan images (balanced and unbalanced datasets) to verify the proposed method's efficacy. We compared the proposed model to various state-of-the-art architectures in terms of deep feature extraction and classification of COVID-19 chest X-ray and CT scan images, and analyzed its performance using six evaluation measures: sensitivity, specificity, precision, F1-score, accuracy, and Matthew's correlation coefficient (MCC). The proposed method compares favorably with SOTA methods. The rest of this paper is organized as follows: related work is detailed in Section 2; material and methods are discussed in Section 3; the experimental results of the proposed SEL-COVIDNET are presented in Section 4; comparative results and discussion are described in Section 5; finally, the conclusion and future directions are presented in Section 6.

Related work

Deep learning has garnered much attention lately in the battle against the COVID-19 epidemic. Numerous deep learning algorithms have recently been presented as COVID-19 diagnostic tools to assist doctors in making more informed medical decisions. This section discusses research relevant to this study. Researchers have been concentrating their efforts on deep learning algorithms for identifying COVID-19 in chest X-rays. According to Das et al. [20], utilizing a DL-based network in lieu of CT scanners is a cost-effective option. Ozturk et al. [21] developed an architecture named DarkNet, with a binary classifier for positive versus negative COVID-19 chest X-ray images and a multi-class classifier for detecting COVID-19, pneumonia, and normal images. Khan et al. [22] utilized the ImageNet dataset for training the Xception model as a pre-trained network. Apostolopoulos et al. [23] trained a model from scratch and retrieved features for the classification task using MobileNet. Ucar et al. [24] employed the Bayesian optimization approach to optimize the SqueezeNet model for COVID-19 diagnosis. Loey et al. [33] constructed a model using generative adversarial networks (GANs) and transfer learning, with multi-class classifiers for detecting COVID-19, pneumonia, and normal images. Luz et al. [34] improved the EfficientNet model to classify COVID-19 using chest X-ray images. Chhikara et al. [35] built a deep transfer learning-based model using Inception-V3 to detect COVID-19 from chest X-rays and CT scans. Apostolopoulos et al. [36] introduced a deep learning-based diagnostic method for COVID-19, with both a binary classifier and a multi-class classifier. Hemdan et al. [37] developed the COVIDX-Net model for COVID-19 detection using seven distinct deep models. Mehmood et al. [38] proposed a DL-based technique using batch normalization for binary classification. Abugabah et al. [39] proposed a COVID-3D-SCNN model and attained promising results for multi-class classification of COVID-19. Many other researchers have likewise proposed deep learning methods for detecting COVID-19 based on X-ray images [38], [40], [41], [42], [43], [44], [45], [46], [47]. More information about these methods is depicted in Table 11 (see Appendix).
Table 11

An overview comparison of related work methods for detection of COVID-19 using chest X-ray images.

Study | Number of cases | Methods | Performance
Ozturk et al. [21] | 125 COVID-19, 500 normal, 500 pneumonia | DarkCovidNet | Accuracy of 87.02 for 3 classes; 98.08 for 2 classes
Khan et al. [22] | 290 COVID-19, 1203 normal, 931 viral pneumonia, 660 bacterial pneumonia | CoroNet | Accuracy of 95 for 3 classes; 89.6 for 4 classes
Apostolopoulos et al. [23] | Large-scale dataset of 3905 images, 7 classes | MobileNet v2 | Accuracy of 87.66 for 7 classes; 99.18 for 2 classes
Loey et al. [33] | 69 COVID-19, 79 normal, 79 bacterial pneumonia, 79 pneumonia | GAN + transfer learning models (AlexNet, GoogLeNet, ResNet18) | Accuracy of 80.56 for 4 classes; 85.19 for 3 classes; 100 for 2 classes
Luz et al. [34] | 152 COVID-19, 7966 normal, 5421 pneumonia | EfficientNet | Accuracy of 93.9 for 3 classes
Chhikara et al. [35] | 2313 COVID-19, 2313 normal, 2313 viral pneumonia | Inception-V3 | Accuracy of 84.95 for 3 classes
Apostolopoulos et al. [36] | 224 COVID-19, 700 bacterial pneumonia, 504 normal | VGG19 | Accuracy of 98.75 for 2 classes; 93.48 for 3 classes
Hemdan et al. [37] | 25 COVID-19, 25 non-COVID | COVIDX-Net | Accuracy of 90 for 2 classes
Maia et al. [40] | 217 COVID-19, 108 other diseases, 112 healthy | Convolutional SVM | Accuracy of 98.14 for 3 classes
Ibrahim et al. [41] | 371 COVID-19, 4237 non-COVID-19, 4078 bacterial pneumonia, 2882 healthy | AlexNet | Accuracy of 99.62 for 2 classes; 94.00 for 3 classes; 93.42 for 4 classes
Sethy et al. [42] | 25 COVID-19, 25 non-COVID | ResNet50 + SVM | Accuracy of 95.38 for 2 classes
Suat et al. [43] | 331 COVID-19, 1050 pneumonia, 1050 non-COVID | CapsNet | Accuracy of 97.24 for 2 classes; 84.22 for 3 classes
Zhang et al. [44] | 70 COVID-19, 1008 pneumonia | CNN + backbone network | Accuracy of 95.2 for 2 classes
Ghoshal et al. [45] | 68 COVID-19, 2786 bacterial pneumonia, 1583 normal, 1504 viral pneumonia | Bayesian CNN + Dropweights | Accuracy of 92.90 for 4 classes
Panwar et al. [46] | 142 COVID-19, 142 normal | nCOVnet | Accuracy of 88 for 2 classes
Rahman et al. [47] | 3616 COVID-19, 8851 normal, 6012 non-COVID | DenseNet201 | Accuracy of 95.11 for 3 classes
Mehmood et al. [38] | 1290 COVID-19, 1946 normal | CNN with batch normalization | Accuracy of 96.6 for 2 classes
Montalbo [64] | 1281 COVID-19, 3270 normal, 4657 pneumonia | Truncated DenseNet | Accuracy of 97.8 for 3 classes
Abugabah et al. [39] | 575 COVID-19, 1200 non-COVID, 1400 pneumonia | COVID-3D-SCNN | Accuracy of 96.7 for 3 classes
Saad et al. [66] | 2628 COVID-19, 1620 non-COVID | Deep feature concatenation (DFC) | Accuracy of 99.3 for 2 classes
Additionally, other studies have investigated deep learning algorithms for COVID-19 detection based on CT scans. Wang et al. [25] developed the M-Inception model for classifying viral pneumonia and COVID-19; their method attained a total accuracy of 73.1% on the test dataset. Zheng et al. [26] introduced a novel deep learning model (DeCovNet) that achieved a 90.1% accuracy rate. Li et al. [27] presented the COVNet model trained using ResNet50; the experimental findings indicated that COVNet had an accuracy of 0.96 for the COVID-19 classification. Song et al. [28] created a deep learning model (DeepPneumonia) for COVID-19 detection; its overall accuracy for COVID-19 vs. bacterial pneumonia classification was 86.0%, while its overall accuracy for COVID-19 vs. healthy person classification was 94.0%. Table 1 has more information about other work methods for detecting COVID-19 with CT scans.
Table 1

A comparison of related work methods for detecting COVID-19 using CT scan images. (CAP) community-acquired pneumonia.

Study | Number of cases | Methods | Performance %
Wang et al. [25] | 44 COVID-19, 55 viral pneumonia | M-Inception | Accuracy of 73.1 for 2 classes
Zheng et al. [26] | 313 COVID-19, 229 no-findings | DeCovNet | Accuracy of 90.1 for 2 classes
Li et al. [27] | 1296 COVID-19, 1735 pneumonia, 1325 non-pneumonia | COVNet | Accuracy of 96 for 3 classes
Song et al. [28] | 88 COVID-19, 101 bacterial pneumonia, 86 healthy | DeepPneumonia | Accuracy of 94.0 for 2 classes; 86.0 for 3 classes
Wang et al. [29] | 325 COVID-19, 740 pneumonia | InceptionNet | Accuracy of 89.50 for 2 classes
Shi et al. [30] | 1658 non-COVID, 1027 CAP | Random Forest | Accuracy of 87.9 for 2 classes
Li et al. [31] | 1292 COVID-19, 1735 CAP, 1325 non-pneumonia | ResNet50 backbone | Accuracy of 96.3 for 3 classes
Xu et al. [32] | 219 COVID-19, 224 Influenza-A, 175 healthy | Attention-oriented model based on ResNet18 | Accuracy of 86.7 for 3 classes

Material and methods

This section describes datasets and the pre-trained CNNs utilized to complete the proposed SEL-COVIDNet architecture.

Dataset description

COVID-19 infection cases have now surpassed 457 million worldwide, with an estimated 6 million deaths [1]. Severely affected nations have attempted to openly share clinical and radiographic data. We therefore used four publicly available datasets:
X-ray-Dataset 1: Collected from two sources. Source one, "X-ray Image DataSet" [48], contains 125 COVID-19, 500 no-finding, and 500 pneumonia images. The second source was compiled by a team of researchers from different universities [49]; we used its first release, which has 219 COVID-19-positive, 1341 no-finding, and 1345 pneumonia images. In total, dataset 1 consists of three classes (COVID-19, No-finding, and Pneumonia) with 344, 1841, and 1845 images, respectively.
X-ray-Dataset 2: Unbalanced data divided into three classes [50], with 576 COVID-19, 1583 no-finding, and 4273 pneumonia images.
X-ray-Dataset 3: Balanced data divided into three classes [51]; 6939 samples were collected, with 2313 samples in each category.
CT-Dataset 4: The "SARS-CoV-2" CT scan dataset has 1252 CT scans of patients who tested positive for COVID-19 and 1229 CT scans of patients who tested negative, totaling 2481 CT scans gathered from genuine patients in hospitals in São Paulo, Brazil [52].
Fig. 1 illustrates the distribution of the used datasets.
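As a quick sanity check, the per-class totals of Dataset 1 follow directly from the two sources; a trivial sketch (the dictionary names are ours, purely for illustration):

```python
# X-ray-Dataset 1 merges two sources [48] and [49]; the per-class totals
# should equal the sums reported in the text (344, 1841, 1845).
source_1 = {"COVID-19": 125, "No-finding": 500, "Pneumonia": 500}    # [48]
source_2 = {"COVID-19": 219, "No-finding": 1341, "Pneumonia": 1345}  # [49], first release

dataset_1 = {cls: source_1[cls] + source_2[cls] for cls in source_1}
print(dataset_1)  # {'COVID-19': 344, 'No-finding': 1841, 'Pneumonia': 1845}
```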
Fig. 1

Datasets distribution.


Pre-trained CNNs

VGGNet: VGGNet was developed by Karen Simonyan and Andrew Zisserman based on the CNN architecture [53]. VGGNet performed impressively on the ImageNet dataset. To increase feature extraction capability, VGGNet employs smaller 3 × 3 filters, as opposed to AlexNet's 11 × 11 filters. VGG16 and VGG19 are two variants of this deep network design with differing depths; VGG19 is a deeper version of VGG16, but it has more parameters, making the network more costly to train.
InceptionV3: InceptionV3 is an improved CNN design in the Inception family, introduced by Szegedy et al. [54] as the third version of Google's Inception CNN. It was designed to allow deeper networks while limiting the number of parameters to around 25 million, compared to 60 million for AlexNet [55].
InceptionResNetV2: InceptionResNetV2 derives from the InceptionV3 model and is far deeper. It has 164 layers and combines Inception modules with residual connections. It was trained on over a million images from the ImageNet collection [56].
Residual Network (ResNet): ResNet was first developed in 2015 by Kaiming He et al. [57] at Microsoft. ResNet's central concept is an "identity shortcut connection" that bypasses one or more layers. ResNet comes in variants that operate on the same principle but have different depths, such as ResNet-34, ResNet-50, and ResNet-101.
MobileNetV2: MobileNetV2 was first developed by Sandler et al. [58]. It is optimized for systems with limited processing capacity and is built on an inverted residual structure, with residual connections between bottleneck layers.
MobileNetV3: MobileNetV3 is the next generation of MobileNetV2, first developed by Howard et al. [59]. It was tailored to mobile phone processors using a combination of hardware-aware network architecture search (NAS) and the NetAdapt algorithm, then enhanced further with novel architectural advances. MobileNetV3 comes in two variants that operate on the same principle, MobileNetV3-Large and MobileNetV3-Small. Compared to MobileNetV2, the authors in [59] state that MobileNetV3-Large is 3.2% more accurate on ImageNet classification while reducing latency by 15%, and MobileNetV3-Small is 4.6% more accurate while lowering latency by 5%.
DenseNet: DenseNet was first developed in 2016 by Huang et al. [60]. It alleviates the vanishing gradient problem, strengthens feature propagation, encourages feature reuse, and significantly reduces the number of parameters.
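The parameter-saving argument behind VGGNet's small filters can be made concrete: two stacked 3 × 3 convolutions cover the same 5 × 5 receptive field as one larger filter, with fewer weights. A minimal sketch (channel count chosen for illustration, biases ignored):

```python
# Weight count of a square convolution with c_in input and c_out output channels.
def conv_params(kernel, c_in, c_out):
    return kernel * kernel * c_in * c_out

c = 64  # illustrative channel width
stacked_3x3 = 2 * conv_params(3, c, c)  # two stacked 3x3 layers (5x5 receptive field)
single_5x5 = conv_params(5, c, c)       # one 5x5 layer, same receptive field
print(stacked_3x3, single_5x5)  # 73728 102400
```

The stacked design is cheaper and also inserts an extra non-linearity between the two layers.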

Proposed SEL-COVIDNET description

We developed a novel classification model for determining the COVID-19 status in 2D chest images. Fig. 2 displays the whole process of our proposed SEL-COVIDNET, which is built on nine distinct DL architectures: DenseNet121, VGG19, InceptionV3, InceptionResNetV2, ResNet50, ResNet101, MobileNetV2, MobileNetV3-Small, and MobileNetV3-Large.
Fig. 2

Flowchart of proposed SEL-COVIDNET schematic for classifying the COVID-19 status in chest images.

The SEL-COVIDNET framework consists of several steps that enable the classification of COVID-19. First, in preprocessing, all chest images are collected into a single dataset and scaled to a fixed size of 224 × 224 pixels. The preprocessed dataset is split 80–20 to begin the training phase of one of the nine tuned deep learning models; the remaining 20% of images are used during the testing stage. It is worth mentioning that we use the pre-trained ImageNet weights for the tuned model. The output of the tuned pre-trained model is used without the top output layers and is passed through a Global Average Pooling (GAP) layer. GAP has some advantages, such as enforcing correspondence between feature maps and categories. Another benefit of global average pooling is that there is no parameter to optimize, so over-fitting is not a problem at this layer; GAP sums up the spatial information, making it more robust to spatial translations of the input [61]. The output is then flattened in order to stack two fully connected dense layers of 512 nodes each. A dense layer provides learned features derived from the preceding layer's combinational features. These two fully connected dense layers use 'ReLU' as the activation function. The number of classes to be predicted is two for the binary classifier and three for the multi-class classifier. Therefore, we add a final fully connected output layer with two or three nodes, using 'Sigmoid' as the activation function for the binary classifier and 'Softmax' for the multi-class classifier. The SEL-COVIDNET model is based on DL, which uses different hyperparameters for training. We used the categorical cross-entropy loss function for the multi-class classifier and binary cross-entropy for the binary classifier, with the Adam optimizer as the optimization algorithm for controlling sparse gradients on noisy problems.
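The classification head described above can be sketched in NumPy to make the shapes concrete. This is a minimal forward-pass sketch, not the trained model: the weights are random and the 7 × 7 × 1280 backbone feature map is an assumed shape, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Backbone output with the top layers removed (shape assumed for illustration).
features = rng.standard_normal((7, 7, 1280))

# Global Average Pooling: average out the spatial dimensions -> a 1280-vector.
# No parameters to optimize here, so this layer cannot over-fit.
gap = features.mean(axis=(0, 1))

# Two fully connected layers of 512 nodes each with ReLU (random weights here;
# the real model learns them during fine-tuning).
w1 = rng.standard_normal((1280, 512)) * 0.01
w2 = rng.standard_normal((512, 512)) * 0.01
h = relu(relu(gap @ w1) @ w2)

# Final output layer: three nodes with softmax for the multi-class case.
w_out = rng.standard_normal((512, 3)) * 0.01
probs = softmax(h @ w_out)
print(probs.shape)  # (3,)
```

For the binary case the final layer would instead have two nodes with a sigmoid activation, as stated above.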

Experiments and analysis

The SEL-COVIDNET framework, including its deep learning models, was implemented in Python in a Google Colab notebook using a graphical processing unit (GPU). It is worth mentioning that all models were trained for 50 epochs. This study used a callback function called Reduce LR on Plateau (RLRoP) to improve the models' convergence and flexibility throughout training without placing a significant demand on computing resources [62]. We also employed early stopping, so training halts once the validation error stops improving. Table 2 shows the hyperparameters for the proposed method.
Table 2

Hyperparameter configuration.

Parameter | Value
Activation Function | ReLU / Sigmoid / Softmax
Base Learning Rate | 0.001
Minimum Learning Rate | 1e-5
Epochs | 50
Batch Size | 32
Optimizer | Adam
Loss Function | Binary cross-entropy (binary classifier); categorical cross-entropy (multi-class classifier)
Early Stopping patience | 10
Monitor | Validation accuracy
Factor | 0.1
ReduceLROnPlateau patience | 2
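The Reduce-LR-on-Plateau schedule from Table 2 can be sketched in a few lines of plain Python. This is an illustrative re-implementation of the scheduling logic with the paper's values (factor 0.1, patience 2, minimum LR 1e-5), not the Keras callback itself:

```python
# Minimal sketch: cut the learning rate by `factor` once the monitored
# validation accuracy fails to improve for `patience` consecutive epochs,
# never dropping below `min_lr`. Returns the LR used at each epoch.
def rlrop_schedule(val_accs, base_lr=0.001, factor=0.1, patience=2, min_lr=1e-5):
    lr, best, wait, lrs = base_lr, -float("inf"), 0, []
    for acc in val_accs:
        lrs.append(lr)
        if acc > best:
            best, wait = acc, 0
        else:
            wait += 1
            if wait >= patience:
                lr = max(lr * factor, min_lr)
                wait = 0
    return lrs

# Accuracy plateaus at 0.95, so the LR is reduced after two stale epochs.
print(rlrop_schedule([0.90, 0.93, 0.95, 0.95, 0.95, 0.95, 0.95]))
```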
We employed six different metrics to examine the accuracy of COVID-19 categorization in the test chest images and to determine the efficacy of each DL model in the SEL-COVIDNET. The accuracy (Acc), sensitivity (Sen), specificity (Spc), positive predictive value-precision (Ppv), F1-score, and Matthew's correlation coefficient (MCC) quantify the agreement between actual and predicted classes, as represented in Eqs. (1)–(6), respectively. Mathematically, each performance statistic is denoted as follows:
Acc = (TP + TN) / (TP + TN + FP + FN) (1)
Sen = TP / (TP + FN) (2)
Spc = TN / (TN + FP) (3)
Ppv = TP / (TP + FP) (4)
F1-score = 2 × (Ppv × Sen) / (Ppv + Sen) (5)
MCC = (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN)) (6)
where "TP", "TN", "FP", and "FN" stand for true positive, true negative, false positive, and false negative, respectively.
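The six measures follow directly from the four confusion counts; a minimal sketch using the standard definitions (the confusion counts below are hypothetical, chosen only to exercise the formulas):

```python
import math

def classification_metrics(tp, tn, fp, fn):
    """Standard per-class metrics: Acc, Sen, Spc, Ppv, F1-score, and MCC."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    sen = tp / (tp + fn)                      # sensitivity (recall)
    spc = tn / (tn + fp)                      # specificity
    ppv = tp / (tp + fp)                      # precision
    f1 = 2 * ppv * sen / (ppv + sen)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {"Acc": acc, "Sen": sen, "Spc": spc, "Ppv": ppv, "F1": f1, "MCC": mcc}

# Hypothetical confusion counts for illustration:
m = classification_metrics(tp=95, tn=90, fp=10, fn=5)
print({k: round(v, 3) for k, v in m.items()})
```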

Performance evaluation on X-ray-Dataset 1

Among all 3-class DL models used in the SEL-COVIDNET, the VGG19 model had the lowest accuracy of 88.55%, while the InceptionResNetV2 and MobileNetV3Large models had the highest accuracy scores, 96.43% and 96.31%, respectively. Regarding individual class accuracy, InceptionResNetV2 had an accuracy of 99.6% for the COVID-19 class, followed by MobileNetV3Small. Both MobileNetV3Small and MobileNetV3Large had nearly identical accuracy for the other two classes. Table 3 depicts a detailed comparison of multi-class DL models using X-ray-Dataset 1. Moreover, among the binary-class DL models used in the SEL-COVIDNET, the MobileNetV2 model had the lowest accuracy of 87.5%. In contrast, the InceptionResNetV2 model had the highest accuracy score of 99.32%, followed by the ResNet101 model. Similarly, the same model had the best individual-class accuracy of 99.3% for the COVID-19 class. Table 4 depicts a detailed comparison of binary-class DL models using X-ray-Dataset 1. The confusion matrix of the DL models used in the SEL-COVIDNET on the test X-ray-Dataset 1 is shown in Fig. 3, Fig. 4.
Table 3

Evaluation performance of multi-class DL models used in the SEL-COVIDNET on X-ray dataset 1 (0: COVID-19, 1: No-finding, 2: Pneumonia). The best overall accuracy is reported in bold red.

Model | Class | Acc | Sen | Spc | Ppv | F1-Score | MCC | Overall Acc (%)
DenseNet121 | 0 | 0.991 | 0.971 | 0.993 | 0.931 | 0.950 | 0.946 | 94.70
DenseNet121 | 1 | 0.953 | 0.935 | 0.968 | 0.961 | 0.948 | 0.906
DenseNet121 | 2 | 0.950 | 0.954 | 0.945 | 0.937 | 0.945 | 0.899

InceptionV3 | 0 | 0.988 | 0.942 | 0.992 | 0.915 | 0.929 | 0.922 | 92.86
InceptionV3 | 1 | 0.937 | 0.922 | 0.950 | 0.940 | 0.931 | 0.873
InceptionV3 | 2 | 0.932 | 0.933 | 0.932 | 0.920 | 0.927 | 0.864

VGG19 | 0 | 0.964 | 0.783 | 0.981 | 0.794 | 0.788 | 0.769 | 88.55
VGG19 | 1 | 0.909 | 0.895 | 0.921 | 0.905 | 0.900 | 0.816
VGG19 | 2 | 0.898 | 0.895 | 0.900 | 0.883 | 0.889 | 0.794

InceptionResNetV2 | 0 | 0.996 | 0.971 | 0.999 | 0.985 | 0.978 | 0.976 | 96.43
InceptionResNetV2 | 1 | 0.967 | 0.957 | 0.975 | 0.970 | 0.963 | 0.933
InceptionResNetV2 | 2 | 0.966 | 0.970 | 0.961 | 0.955 | 0.963 | 0.931

ResNet50 | 0 | 0.990 | 0.913 | 0.997 | 0.969 | 0.940 | 0.935 | 92.86
ResNet50 | 1 | 0.937 | 0.906 | 0.964 | 0.955 | 0.929 | 0.874
ResNet50 | 2 | 0.930 | 0.954 | 0.909 | 0.899 | 0.926 | 0.861

ResNet101 | 0 | 0.983 | 0.899 | 0.991 | 0.899 | 0.899 | 0.889 | 92.36
ResNet101 | 1 | 0.936 | 0.919 | 0.950 | 0.939 | 0.929 | 0.871
ResNet101 | 2 | 0.929 | 0.933 | 0.925 | 0.913 | 0.923 | 0.857

MobileNetV2 | 0 | 0.993 | 0.942 | 0.997 | 0.970 | 0.956 | 0.952 | 94.83
MobileNetV2 | 1 | 0.952 | 0.960 | 0.946 | 0.937 | 0.948 | 0.904
MobileNetV2 | 2 | 0.952 | 0.938 | 0.964 | 0.956 | 0.947 | 0.903

MobileNetV3Small | 0 | 0.995 | 0.957 | 0.999 | 0.985 | 0.971 | 0.968 | 95.57
MobileNetV3Small | 1 | 0.958 | 0.946 | 0.968 | 0.962 | 0.954 | 0.916
MobileNetV3Small | 2 | 0.958 | 0.965 | 0.952 | 0.945 | 0.955 | 0.916

MobileNetV3Large | 0 | 0.995 | 0.957 | 0.999 | 0.985 | 0.971 | 0.968 | 96.31
MobileNetV3Large | 1 | 0.967 | 0.957 | 0.975 | 0.970 | 0.963 | 0.933
MobileNetV3Large | 2 | 0.964 | 0.970 | 0.959 | 0.953 | 0.961 | 0.928
Table 4

Evaluation performance of binary-class DL models used in the SEL-COVIDNET on X-ray dataset 1 (0: COVID-19, 1: No-finding). The best overall accuracy is reported in bold red.

Model | Class | Acc | Sen | Spc | Ppv | F1-score | MCC | Overall Acc (%)
DenseNet121 | 0 | 0.986 | 0.942 | 0.995 | 0.970 | 0.956 | 0.948 | 98.64
DenseNet121 | 1 | 0.986 | 0.995 | 0.942 | 0.989 | 0.992 | 0.948

InceptionV3 | 0 | 0.984 | 0.913 | 0.997 | 0.984 | 0.947 | 0.939 | 98.41
InceptionV3 | 1 | 0.984 | 0.997 | 0.913 | 0.984 | 0.991 | 0.939

VGG19 | 0 | 0.966 | 0.870 | 0.984 | 0.909 | 0.889 | 0.869 | 96.59
VGG19 | 1 | 0.966 | 0.984 | 0.870 | 0.976 | 0.980 | 0.869

InceptionResNetV2 | 0 | 0.993 | 0.971 | 0.997 | 0.985 | 0.978 | 0.974 | 99.32
InceptionResNetV2 | 1 | 0.993 | 0.997 | 0.971 | 0.995 | 0.996 | 0.974

ResNet50 | 0 | 0.986 | 0.942 | 0.995 | 0.970 | 0.956 | 0.948 | 98.64
ResNet50 | 1 | 0.986 | 0.995 | 0.942 | 0.989 | 0.992 | 0.948

ResNet101 | 0 | 0.991 | 0.971 | 0.995 | 0.971 | 0.971 | 0.966 | 99.09
ResNet101 | 1 | 0.991 | 0.995 | 0.971 | 0.995 | 0.995 | 0.966

MobileNetV2 | 0 | 0.875 | 0.203 | 1.000 | 1.000 | 0.337 | 0.420 | 87.50
MobileNetV2 | 1 | 0.875 | 1.000 | 0.203 | 0.871 | 0.931 | 0.420

MobileNetV3Small | 0 | 0.986 | 0.928 | 0.997 | 0.985 | 0.955 | 0.948 | 98.64
MobileNetV3Small | 1 | 0.986 | 0.997 | 0.928 | 0.987 | 0.992 | 0.948

MobileNetV3Large | 0 | 0.857 | 0.087 | 1.000 | 1.000 | 0.160 | 0.273 | 85.68
MobileNetV3Large | 1 | 0.857 | 1.000 | 0.087 | 0.855 | 0.922 | 0.273
Fig. 3

Confusion matrix of multi-class DL models used in the SEL-COVIDNET on X-ray dataset 1. (0: COVID-19, 1: No-finding, 2: Pneumonia).

Fig. 4

Confusion matrix of binary-class DL models used in the SEL-COVIDNET on X-ray dataset 1. (0: COVID-19, 1: No-finding).


Performance evaluation on X-ray-Dataset 2

Here, all models showed an overall accuracy above 94%. Of these, both DenseNet121 and MobileNetV3Large outperformed the rest with an overall accuracy of 98.52%. Regarding COVID-19 class accuracy, the DenseNet121 and MobileNetV3Small models performed well. For the binary class, all models showed an overall accuracy above 98%; both DenseNet121 and InceptionV3 outperformed the rest with an overall accuracy of 99.77%. Table 5, Table 6 depict a detailed comparison of multi-class and binary-class DL models using X-ray-Dataset 2. The confusion matrix of the DL models used in the SEL-COVIDNET on the test X-ray-Dataset 2 is shown in Fig. 5, Fig. 6.
Table 5

Evaluation performance of multi-class DL models used in the SEL-COVIDNET on X-ray dataset 2 (0: COVID-19, 1: No-finding, 2: Pneumonia). The best overall accuracy is reported in bold red.

Model | Class | Acc | Sen | Spc | Ppv | F1-score | MCC | Overall Acc (%)
DenseNet121 | 0 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 98.52
DenseNet121 | 1 | 0.985 | 0.965 | 0.992 | 0.975 | 0.970 | 0.960
DenseNet121 | 2 | 0.985 | 0.991 | 0.975 | 0.987 | 0.989 | 0.967

InceptionV3 | 0 | 0.998 | 1.000 | 0.997 | 0.975 | 0.987 | 0.986 | 97.75
InceptionV3 | 1 | 0.980 | 0.950 | 0.990 | 0.968 | 0.959 | 0.945
InceptionV3 | 2 | 0.977 | 0.985 | 0.963 | 0.981 | 0.983 | 0.949

VGG19 | 0 | 0.988 | 0.922 | 0.994 | 0.938 | 0.930 | 0.923 | 94.95
VGG19 | 1 | 0.956 | 0.912 | 0.970 | 0.909 | 0.910 | 0.881
VGG19 | 2 | 0.956 | 0.967 | 0.933 | 0.966 | 0.967 | 0.901

InceptionResNetV2 | 0 | 0.998 | 1.000 | 0.998 | 0.983 | 0.991 | 0.991 | 98.29
InceptionResNetV2 | 1 | 0.984 | 0.959 | 0.993 | 0.977 | 0.968 | 0.958
InceptionResNetV2 | 2 | 0.983 | 0.989 | 0.970 | 0.985 | 0.987 | 0.962

ResNet50 | 0 | 0.999 | 1.000 | 0.999 | 0.991 | 0.996 | 0.995 | 97.98
ResNet50 | 1 | 0.981 | 0.950 | 0.991 | 0.971 | 0.960 | 0.947
ResNet50 | 2 | 0.980 | 0.988 | 0.963 | 0.981 | 0.985 | 0.955

ResNet101 | 0 | 0.998 | 0.983 | 1.000 | 1.000 | 0.991 | 0.990 | 97.67
ResNet101 | 1 | 0.978 | 0.953 | 0.987 | 0.959 | 0.956 | 0.941
ResNet101 | 2 | 0.977 | 0.985 | 0.961 | 0.980 | 0.982 | 0.948

MobileNetV2 | 0 | 0.999 | 1.000 | 0.999 | 0.991 | 0.996 | 0.995 | 97.44
MobileNetV2 | 1 | 0.975 | 0.927 | 0.991 | 0.970 | 0.948 | 0.932
MobileNetV2 | 2 | 0.973 | 0.988 | 0.941 | 0.974 | 0.981 | 0.938

MobileNetV3Small | 0 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 98.45
MobileNetV3Small | 1 | 0.984 | 0.962 | 0.992 | 0.974 | 0.968 | 0.958
MobileNetV3Small | 2 | 0.984 | 0.991 | 0.972 | 0.986 | 0.988 | 0.965

MobileNetV3Large | 0 | 0.998 | 1.000 | 0.998 | 0.983 | 0.991 | 0.991 | 98.52
MobileNetV3Large | 1 | 0.986 | 0.959 | 0.995 | 0.984 | 0.971 | 0.962
MobileNetV3Large | 2 | 0.986 | 0.993 | 0.972 | 0.986 | 0.990 | 0.969
Table 6

Evaluation performance of binary-class DL models used in the SEL-COVIDNET on X-ray dataset 2 (0: COVID-19, 1: No-finding). The best overall accuracy is reported in bold red.

Model | Class | Acc | Sen | Spc | Ppv | F1-score | MCC | Overall Acc (%)
DenseNet121 | 0 | 0.998 | 1.000 | 0.997 | 0.991 | 0.996 | 0.994 | 99.77
DenseNet121 | 1 | 0.998 | 0.997 | 1.000 | 1.000 | 0.998 | 0.994

InceptionV3 | 0 | 0.998 | 1.000 | 0.997 | 0.991 | 0.996 | 0.994 | 99.77
InceptionV3 | 1 | 0.998 | 0.997 | 1.000 | 1.000 | 0.998 | 0.994

VGG19 | 0 | 0.968 | 0.939 | 0.978 | 0.939 | 0.939 | 0.917 | 96.76
VGG19 | 1 | 0.968 | 0.978 | 0.939 | 0.978 | 0.978 | 0.917

InceptionResNetV2 | 0 | 0.995 | 0.991 | 0.997 | 0.991 | 0.991 | 0.988 | 99.54
InceptionResNetV2 | 1 | 0.995 | 0.997 | 0.991 | 0.997 | 0.997 | 0.988

ResNet50 | 0 | 0.991 | 1.000 | 0.987 | 0.966 | 0.983 | 0.977 | 99.07
ResNet50 | 1 | 0.991 | 0.987 | 1.000 | 1.000 | 0.994 | 0.977

ResNet101 | 0 | 0.984 | 0.983 | 0.984 | 0.958 | 0.970 | 0.959 | 98.38
ResNet101 | 1 | 0.984 | 0.984 | 0.983 | 0.994 | 0.989 | 0.959

MobileNetV2 | 0 | 0.993 | 0.983 | 0.997 | 0.991 | 0.987 | 0.982 | 99.31
MobileNetV2 | 1 | 0.993 | 0.997 | 0.983 | 0.994 | 0.995 | 0.982

MobileNetV3Small | 0 | 0.995 | 1.000 | 0.994 | 0.983 | 0.991 | 0.988 | 99.54
MobileNetV3Small | 1 | 0.995 | 0.994 | 1.000 | 1.000 | 0.997 | 0.988

MobileNetV3Large | 0 | 0.993 | 1.000 | 0.991 | 0.975 | 0.987 | 0.983 | 99.31
MobileNetV3Large | 1 | 0.993 | 0.991 | 1.000 | 1.000 | 0.995 | 0.983
Fig. 5

Confusion matrix of multi-class DL models used in the SEL-COVIDNET on X-ray dataset 2. (0: COVID-19, 1: No-finding, 2: pneumonia).

Fig. 6

Confusion matrix of binary-class DL models used in the SEL-COVIDNET on X-ray dataset 2. (0: COVID-19, 1: No-finding).


Performance evaluation on X-ray-Dataset 3

On the balanced X-ray-Dataset 3, all models achieved an overall accuracy above 93%, with MobileNetV3Large outperforming the rest at 96.25%; the same model also performed well on the individual class accuracies. For binary-class classification on the same dataset, all models achieved an overall accuracy above 96%, except for MobileNetV2, which attained 85.31%. Table 7 and Table 8 compare the multi-class and binary-class DL models on X-ray-Dataset 3. The confusion matrices of the DL models used in the SEL-COVIDNET on the test X-ray-Dataset 3 are shown in Fig. 7 and Fig. 8.
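The six per-class measures reported in these tables (Acc, Sen, Spc, Ppv, F1-score, MCC) all follow from one-vs-rest confusion-matrix counts. The following library-free sketch illustrates the standard definitions; it is not the authors' evaluation code:

```python
import math

def per_class_metrics(cm, k):
    """One-vs-rest metrics for class k from a square confusion
    matrix cm, where cm[i][j] counts true class i predicted as j."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    tp = cm[k][k]
    fn = sum(cm[k]) - tp                         # class-k samples missed
    fp = sum(cm[i][k] for i in range(n)) - tp    # others predicted as k
    tn = total - tp - fn - fp
    sen = tp / (tp + fn)                         # sensitivity (recall)
    spc = tn / (tn + fp)                         # specificity
    ppv = tp / (tp + fp)                         # precision
    acc = (tp + tn) / total                      # one-vs-rest accuracy
    f1 = 2 * ppv * sen / (ppv + sen)             # harmonic mean of Ppv, Sen
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {"Acc": acc, "Sen": sen, "Spc": spc,
            "Ppv": ppv, "F1": f1, "MCC": mcc}
```

For a 3-class row such as DenseNet121's, the function would be called once per class (k = 0, 1, 2) on the corresponding confusion matrix from Fig. 7, while the overall accuracy is simply the trace of the matrix divided by the total count.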
Table 7

Evaluation performance of multi-class DL models used in the SEL-COVIDNET on X-ray dataset 3-Balanced (0: COVID-19, 1: No-finding, 2: Pneumonia). The best overall accuracy is reported in bold red.

Model | Class | Acc | Sen | Spc | Ppv | F1-score | MCC | Overall Acc (%)
DenseNet121 | 0 | 0.993 | 0.994 | 0.992 | 0.985 | 0.989 | 0.984 | 95.82
 | 1 | 0.962 | 0.950 | 0.968 | 0.936 | 0.943 | 0.914 |
 | 2 | 0.962 | 0.931 | 0.977 | 0.954 | 0.942 | 0.914 |
InceptionV3 | 0 | 0.994 | 0.994 | 0.995 | 0.989 | 0.991 | 0.987 | 96.11
 | 1 | 0.965 | 0.965 | 0.965 | 0.933 | 0.949 | 0.923 |
 | 2 | 0.963 | 0.924 | 0.982 | 0.962 | 0.943 | 0.915 |
VGG19 | 0 | 0.968 | 0.955 | 0.974 | 0.948 | 0.952 | 0.927 | 93.23
 | 1 | 0.948 | 0.946 | 0.949 | 0.903 | 0.924 | 0.885 |
 | 2 | 0.949 | 0.896 | 0.975 | 0.947 | 0.921 | 0.884 |
InceptionResNetV2 | 0 | 0.991 | 0.983 | 0.996 | 0.991 | 0.987 | 0.981 | 95.68
 | 1 | 0.961 | 0.959 | 0.962 | 0.927 | 0.943 | 0.913 |
 | 2 | 0.961 | 0.929 | 0.977 | 0.953 | 0.941 | 0.912 |
ResNet50 | 0 | 0.993 | 0.989 | 0.995 | 0.989 | 0.989 | 0.984 | 95.17
 | 1 | 0.955 | 0.957 | 0.955 | 0.913 | 0.934 | 0.901 |
 | 2 | 0.955 | 0.909 | 0.978 | 0.955 | 0.931 | 0.899 |
ResNet101 | 0 | 0.994 | 0.989 | 0.996 | 0.991 | 0.990 | 0.985 | 95.03
 | 1 | 0.954 | 0.942 | 0.960 | 0.922 | 0.931 | 0.897 |
 | 2 | 0.953 | 0.920 | 0.970 | 0.938 | 0.929 | 0.894 |
MobileNetV2 | 0 | 0.991 | 0.983 | 0.995 | 0.989 | 0.986 | 0.979 | 94.81
 | 1 | 0.955 | 0.935 | 0.964 | 0.929 | 0.932 | 0.898 |
 | 2 | 0.951 | 0.927 | 0.963 | 0.927 | 0.927 | 0.890 |
MobileNetV3Small | 0 | 0.993 | 0.985 | 0.997 | 0.993 | 0.989 | 0.984 | 95.89
 | 1 | 0.963 | 0.972 | 0.959 | 0.922 | 0.946 | 0.919 |
 | 2 | 0.962 | 0.920 | 0.983 | 0.964 | 0.941 | 0.914 |
MobileNetV3Large | 0 | 0.994 | 0.987 | 0.997 | 0.993 | 0.990 | 0.985 | 96.25
 | 1 | 0.967 | 0.974 | 0.963 | 0.930 | 0.951 | 0.927 |
 | 2 | 0.965 | 0.927 | 0.984 | 0.966 | 0.946 | 0.920 |
Table 8

Evaluation performance of binary-class DL models used in the SEL-COVIDNET on X-ray dataset 3-Balanced (0: COVID-19, 1: No-finding). The best overall accuracy is reported in bold red.

Model | Class | Acc | Sen | Spc | Ppv | F1-score | MCC | Overall Acc (%)
DenseNet121 | 0 | 0.991 | 0.991 | 0.991 | 0.991 | 0.991 | 0.983 | 99.14
 | 1 | 0.991 | 0.991 | 0.991 | 0.991 | 0.991 | 0.983 |
InceptionV3 | 0 | 0.982 | 0.985 | 0.978 | 0.979 | 0.982 | 0.963 | 98.16
 | 1 | 0.982 | 0.978 | 0.985 | 0.985 | 0.982 | 0.963 |
VGG19 | 0 | 0.965 | 0.972 | 0.959 | 0.959 | 0.966 | 0.931 | 96.54
 | 1 | 0.965 | 0.965 | 0.972 | 0.972 | 0.965 | 0.931 |
InceptionResNetV2 | 0 | 0.983 | 0.987 | 0.978 | 0.979 | 0.983 | 0.965 | 98.27
 | 1 | 0.983 | 0.978 | 0.987 | 0.987 | 0.983 | 0.965 |
ResNet50 | 0 | 0.983 | 0.981 | 0.985 | 0.985 | 0.983 | 0.965 | 98.27
 | 1 | 0.983 | 0.985 | 0.981 | 0.981 | 0.983 | 0.965 |
ResNet101 | 0 | 0.987 | 0.981 | 0.994 | 0.993 | 0.987 | 0.974 | 98.70
 | 1 | 0.987 | 0.994 | 0.981 | 0.981 | 0.987 | 0.974 |
MobileNetV2 | 0 | 0.853 | 0.706 | 1.000 | 1.000 | 0.828 | 0.739 | 85.31
 | 1 | 0.853 | 1.000 | 0.706 | 0.773 | 0.872 | 0.739 |
MobileNetV3Small | 0 | 0.992 | 0.989 | 0.996 | 0.996 | 0.992 | 0.985 | 99.24
 | 1 | 0.992 | 0.996 | 0.989 | 0.989 | 0.992 | 0.985 |
MobileNetV3Large | 0 | 0.994 | 0.994 | 0.994 | 0.994 | 0.994 | 0.987 | 99.35
 | 1 | 0.994 | 0.994 | 0.994 | 0.994 | 0.994 | 0.987 |
Fig. 7

Confusion matrix of multi-class DL models used in the SEL-COVIDNET on X-ray dataset 3. (0: COVID-19, 1: No-finding, 2: Pneumonia).

Fig. 8

Confusion matrix of binary-class DL models used in the SEL-COVIDNET on X-ray dataset 3. (0: COVID-19, 1: No-finding).


Performance evaluation on CT-Dataset 4

Of the DL models used in the SEL-COVIDNET on the test CT-Dataset, the MobileNetV3Large and MobileNetV2 models outperformed the rest with an overall accuracy of 98.79%, followed by DenseNet121 with an overall accuracy of 98.55%. It is worth mentioning that VGG19 was omitted from this experiment because of its poor performance. Table 9 presents a detailed comparison of the DL models on the CT-Dataset. The confusion matrices of the DL models used in the SEL-COVIDNET on the test CT-Dataset are shown in Fig. 9.
Table 9

Evaluation performance of binary-class DL models used in the SEL-COVIDNET on CT dataset 4 (0: COVID-19, 1: No-finding). The best overall accuracy is reported in bold red.

Model | Class | Acc | Sen | Spc | Ppv | F1-score | MCC | Overall Acc (%)
DenseNet121 | 0 | 0.992 | 0.996 | 0.989 | 0.989 | 0.992 | 0.984 | 98.59
 | 1 | 0.992 | 0.989 | 0.995 | 0.996 | 0.992 | 0.984 |
InceptionV3 | 0 | 0.986 | 0.992 | 0.980 | 0.980 | 0.986 | 0.972 | 98.59
 | 1 | 0.986 | 0.980 | 0.992 | 0.992 | 0.986 | 0.972 |
InceptionResNetV2 | 0 | 0.978 | 0.980 | 0.976 | 0.976 | 0.978 | 0.956 | 97.79
 | 1 | 0.978 | 0.976 | 0.980 | 0.980 | 0.978 | 0.956 |
ResNet50 | 0 | 0.946 | 0.924 | 0.967 | 0.967 | 0.945 | 0.892 | 94.57
 | 1 | 0.946 | 0.967 | 0.924 | 0.926 | 0.946 | 0.892 |
ResNet101 | 0 | 0.950 | 0.932 | 0.967 | 0.967 | 0.949 | 0.900 | 94.97
 | 1 | 0.950 | 0.967 | 0.932 | 0.933 | 0.950 | 0.900 |
MobileNetV2 | 0 | 0.988 | 0.984 | 0.992 | 0.992 | 0.988 | 0.976 | 98.79
 | 1 | 0.988 | 0.992 | 0.984 | 0.984 | 0.988 | 0.976 |
MobileNetV3Small | 0 | 0.980 | 0.988 | 0.972 | 0.973 | 0.980 | 0.960 | 97.99
 | 1 | 0.980 | 0.972 | 0.988 | 0.988 | 0.980 | 0.960 |
MobileNetV3Large | 0 | 0.988 | 0.992 | 0.984 | 0.984 | 0.988 | 0.976 | 98.79
 | 1 | 0.988 | 0.984 | 0.992 | 0.992 | 0.988 | 0.976 |
Fig. 9

Confusion matrix of DL models used in the SEL-COVIDNET on CT dataset 4. (0: COVID-19, 1: No-finding).


Analysis of the DL models’ receiver operating characteristic (ROC) curves on the used datasets

The ROC curve is a critical assessment tool for determining the success of any classification model, and the area under the curve (AUC) measures the degree of separability, i.e., the model's capability to discriminate between classes: the larger the AUC, the more accurately the model predicts. This subsection analyzes the AUC-ROC of the DL models used in the SEL-COVIDNET on each dataset. Fig. 10 (see Appendix) shows the AUC-ROC of the DL models used in the SEL-COVIDNET on X-ray dataset 1. The left side of this figure shows that all models achieved excellent performance across their respective classes; nevertheless, the VGG19 model attained the lowest macro average of 0.97, with noisy oscillations in its curve. Similarly, for the binary classification on the right side of the same figure, the MobileNetV2 and MobileNetV3Large models showed noisy oscillations in their curves. In particular, the DenseNet121, InceptionV3, InceptionResNetV2, ResNet, and MobileNetV3Small models achieved AUROCs of 1.00 across all regions, indicating that they outperformed the other DL models.
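The AUC values discussed here can be computed without plotting at all: for a binary problem, the AUC equals the probability that a randomly chosen positive sample receives a higher score than a randomly chosen negative one (the Mann-Whitney U interpretation). A minimal, library-free sketch of this rank-based computation, shown for illustration rather than as the authors' evaluation code:

```python
def binary_auc(scores, labels):
    """AUC as the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive is scored higher
    (ties count half). O(n^2) pairwise loop for clarity, not speed."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For the multi-class curves in Fig. 10, each class's one-vs-rest AUC would be computed this way from its softmax score, and the macro average reported in the figures is simply the unweighted mean of those per-class AUCs.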
Fig. 10

Receiver operating characteristic of DL models used in the SEL-COVIDNET on X-ray dataset 1. The top side illustrates the 3-class classification, and the bottom side shows the 2-class classification.

The AUC-ROC curves for X-ray dataset 2 are shown in Fig. 11 (see Appendix). All models attained remarkable results across their respective classes. In the 2-class classification (COVID-19 vs. No-finding), all models except VGG19 achieved AUROCs of 1.00 across all regions.
Fig. 11

Receiver operating characteristic of DL models used in the SEL-COVIDNET on X-ray dataset 2. The top side illustrates the 3-class classification, and the bottom side shows the 2-class classification.

Regarding X-ray dataset 3, its AUC-ROC curves are shown in Fig. 12 (see Appendix). The lowest macro-average of 0.98 was attained by the VGG19 model for 3-class classification (COVID-19 vs. No-finding vs. Pneumonia) and by the MobileNetV2 model for 2-class classification (COVID-19 vs. No-finding). As noted, the lowest AUROC of 0.97 stems from X-ray images of patients infected with pneumonia at lower thresholds. The unfavorable impact of truncation was also visible in most of the DL models' curves for the pneumonia class.
Fig. 12

Receiver operating characteristic of DL models used in the SEL-COVIDNET on X-ray dataset 3. The top side illustrates the 3-class classification, and the bottom side shows the 2-class classification.

The AUC-ROC curves on CT scan dataset 4 are depicted in Fig. 13 (see Appendix). DenseNet121, InceptionV3, InceptionResNetV2, MobileNetV2, and MobileNetV3 all attained AUROCs of 1.00 across all regions, indicating that they outperformed the other DL models.
Fig. 13

Receiver operating characteristic of DL models used in the SEL-COVIDNET on CT dataset 4.

Discussion

COVID-19 has resulted in significant public health and safety concerns and has therefore become a worldwide issue [63]. Given the scarcity of PCR tests and their high cost [6], it is beneficial to provide healthcare staff with an artificial-intelligence-based approach for rapidly and correctly predicting COVID-19. In this paper, we proposed an automated health monitoring system for the early diagnosis of COVID-19 from chest images that is less expensive, more accessible to rural populations, and relies on an acquisition apparatus that is easily disinfected, cleaned, and maintained. The SEL-COVIDNET model employs deep learning through nine pre-trained CNNs derived from the ImageNet database, each of which predicts a class for the input image. Our approach has the benefit of assigning a score to each class prediction. Furthermore, it was evaluated on both balanced and unbalanced datasets. Through a thorough examination, the tuned InceptionResNetV2, MobileNetV3Large, and DenseNet121 models attained the highest overall accuracy among the DL models. Specifically, the tuned InceptionResNetV2 achieved the best performance for 3-class classification (COVID-19 vs. No-finding vs. Pneumonia) on dataset 1, with an overall test accuracy of 96.4%, an F1-score of 96.8%, a Sen of 96.6%, a Spc of 97.8%, a Ppv of 97%, and an MCC of 94.6%. For the binary-class classification (COVID-19 vs. No-finding) on the same dataset, it achieved an accuracy of 99.3%, an F1-score of 98.7%, a Sen of 97.4%, and a Ppv of 99%. The SEL-COVIDNET model with tuned DenseNet121 performed best for 3-class classification on dataset 2, with an overall test accuracy of 98.52%, an F1-score of 98.6%, a Sen of 98.5%, a Spc of 98.9%, a Ppv of 98.7%, and an MCC of 97.6%. For the 2-class classification, it achieved an accuracy of 99.8%, an F1-score of 99.7%, a Sen of 99.9%, and a Ppv of 99.6%.
Further, the tuned MobileNetV3Large model outperformed the other DL models for 3-class classification on dataset 3 (balanced data), with an overall test accuracy of 96.25%, an F1-score of 96.2%, a Sen of 96.3%, a Spc of 98.1%, a Ppv of 96.3%, and an MCC of 94.4%. For the 2-class classification, it attained an accuracy of 99.35%, an F1-score of 99.4%, a Sen of 99.4%, and a Ppv of 99.4%. By comparison, Prateek et al. [35] attained an overall test accuracy of 97.70%. In [22], the authors introduced a CNN approach for binary-class and multi-class classification and attained 98.08% and 87.02% accuracy, respectively. Montalbo et al. [64] proposed a model based on a truncated DenseNet and achieved an accuracy of 97.7% for 3-class classification. In [65], the researchers proposed a truncation method based on deep CNNs and achieved promising results, with 97.4% accuracy in 3-class classification. For dataset 4 (CT scans), the tuned MobileNetV3Large and MobileNetV2 models outperformed the other DL models in binary-class classification (COVID-19 vs. No-finding), with an overall test accuracy of 98.79%, an F1-score of 98.8%, a Sen of 98.8%, a Spc of 98.8%, a Ppv of 98.8%, and an MCC of 97.6%. Wang et al. [25] proposed a DL model for screening COVID-19 based on CT-scan images and achieved an accuracy of 89.5%. In [66], researchers described a technique for classifying 2-class CT-scan images based on deep feature concatenation; their model attained an accuracy of 99.3%. Table 10 compares the proposed SEL-COVIDNET model to state-of-the-art (SOTA) methods from a broader perspective. The proposed SEL-COVIDNET model achieved a remarkable 98.52% accuracy, although no direct comparison is possible, since each model has been trained on different classes and different datasets of either X-ray or CT-scan images.
Thus, despite addressing a more challenging task, the SEL-COVIDNET architecture with the tuned-model approach may be more effective and valuable in most cases than the previously published methods.
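Since SEL-COVIDNET assigns a score to each class prediction from each tuned network, the per-model outputs can be combined into a single ensemble decision. The paper's exact stacking rule is not restated in this section, so the sketch below assumes the simplest combination, soft voting (averaging the softmax probability vectors and taking the argmax), purely for illustration:

```python
def soft_vote(prob_lists):
    """Combine per-model class-probability vectors by averaging
    (simple soft voting). Returns (predicted class index, averaged
    probabilities). Assumes every model outputs probabilities over
    the same ordered class set, e.g. [COVID-19, No-finding, Pneumonia]."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models
           for c in range(n_classes)]
    return avg.index(max(avg)), avg
```

For example, if two of three hypothetical tuned models favor class 0 (COVID-19), `soft_vote([[0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])` yields class 0 with averaged probability 0.5; a learned stacking combiner would replace the plain average with a trained meta-classifier over these nine score vectors.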
Table 10

Comparison between the SEL-COVIDNET model and SOTA methods. CAP denotes community-acquired pneumonia.

Model | Dataset Type | Overall Acc (%) | Ppv (%) | Sen (%) | F1-Score (%)
Wang et al. [29] | X-ray (Normal vs. COVID-19 vs. Pneumonia) | 93.3 | 90.9 | 96.8 | N/A
Wang et al. [25] | CT scans (COVID-19 vs. Non-COVID-19) | 89.5 | N/A | 87 | N/A
Al-Falluji et al. [67] | X-ray (Normal vs. COVID-19 vs. Pneumonia) | 96.37 | 100 | 94 | N/A
Singh et al. [68] | X-ray (Normal vs. COVID-19 vs. Pneumonia) | 95.8 | 96.16 | 95.60 | 95.88
Abbas et al. [69] | X-ray (Normal vs. COVID-19 vs. SARS) | 95.12 | N/A | 97.91 | N/A
Ozturk et al. [21] | X-ray (Normal vs. COVID-19 vs. Pneumonia) | 87.02 | N/A | N/A | N/A
Luz et al. [34] | X-ray (Normal vs. COVID-19 vs. Pneumonia) | 93.51 | 100.0 | 80.6 | N/A
Montalbo [64] | X-ray (Normal vs. COVID-19 vs. Pneumonia) | 97.99 | 98.38 | 98.15 | 98.26
Abugabah et al. [39] | X-ray (Normal vs. COVID-19 vs. Pneumonia) | 96.70 | N/A | 96.62 | N/A
Shi et al. [30] | CT (COVID-19 vs. CAP) | 87.9 | N/A | 90.70 | N/A
Qjidaa et al. [70] | X-ray (Normal vs. COVID-19 vs. Pneumonia) | 98 | 98.66 | 98.33 | 98.30
Chhikara et al. [35] | X-ray and CT scans (Normal vs. COVID-19 vs. Pneumonia) | 97.70 | 97.6 | 97.6 | 97.6
Montalbo [65] | X-ray and CT scans (Normal vs. COVID-19 vs. Pneumonia) | 97.41 | 97.59 | 97.52 | 97.55
Saad et al. [66] | X-ray and CT scans (COVID-19 vs. Non-COVID) | 99.3 | 99.79 | 98.8 | 99.3
Li et al. [31] | CT scans (COVID-19 vs. CAP vs. Non-Pneumonia) | 96.3 | N/A | 90 | N/A
Xu et al. [32] | CT scans (COVID-19 vs. Influenza-A vs. Healthy) | 86.7 | 86.9 | 86.7 | 86.7
Proposed SEL-COVIDNET (Tuned DenseNet121) | X-ray and CT scans (COVID-19 vs. No-finding vs. Pneumonia) | 98.52 | 98.7 | 98.5 | 98.6

Conclusions

This study aimed to develop an intelligent method for the early identification of COVID-19 from chest imaging. Given the limited availability of PCR and CT testing in some developing nations, we proposed a DL-based technique for the early detection of COVID-19 using chest X-ray/CT images. To this end, we employed transfer learning for two-class classification (COVID-19 vs. No-finding) and three-class classification (COVID-19 vs. No-finding vs. Pneumonia). We fine-tuned nine pre-trained architectures, namely DenseNet121, VGG19, InceptionV3, InceptionResNetV2, ResNet50, ResNet101, MobileNetV2, MobileNetV3Small, and MobileNetV3Large, and used the nine tuned models to create a stacking model that outperforms each of them individually. On the basis of the findings and analyses of the performed research, this study concludes that the proposed SEL-COVIDNET model achieves superior results in diagnosing COVID-19 from chest image modalities, outperforming the other SOTA approaches. In future work, we intend to test our approach in more medical areas, such as dementia, Alzheimer's disease, and breast cancer. Additionally, we plan to use this model to identify COVID-19 infections caused by genetically mutated viruses. We also intend to incorporate saliency maps to increase the interpretability of the model, which is vital not only for clinical treatment but also for research, and to train the model on an enlarged dataset comprising different X-ray images, which could improve the model's accuracy and generalizability.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.