
COVID-19 detection from chest x-ray using MobileNet and residual separable convolution block.

V Santhosh Kumar Tangudu, Jagadeesh Kakarla, Isunuri Bala Venkateswarlu.

Abstract

A newly emerged coronavirus disease has affected the social and economic life of the world. The virus mainly infects the respiratory system and spreads through airborne transmission. Several countries have witnessed serious consequences of the COVID-19 pandemic. Early detection of COVID-19 infection is a critical step in saving a patient's life. Chest radiography examination is a fast and cost-effective way of detecting COVID-19, and several researchers have therefore been motivated to automate the COVID-19 detection and diagnosis process using chest x-ray images. However, existing models employ deep networks and suffer from high training time. This work presents transfer learning with a residual separable convolution block for COVID-19 detection. The proposed model utilizes a pre-trained MobileNet for binary image classification, and the proposed residual separable convolution block improves the performance of the basic MobileNet. Two publicly available datasets, COVID5K and COVIDRD, have been considered for the evaluation of the proposed model. Our model exhibits superior performance over existing state-of-the-art and pre-trained models, with 99% accuracy on both datasets, and achieves similar performance on noisy datasets. Moreover, it outperforms existing pre-trained models with less training time and shows competitive performance against the basic MobileNet. Further, our model is suitable for mobile applications as it uses fewer parameters and less training time.
© The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2021.


Year:  2022        PMID: 35106060      PMCID: PMC8794607          DOI: 10.1007/s00500-021-06579-3

Source DB:  PubMed          Journal:  Soft comput        ISSN: 1432-7643            Impact factor:   3.732


Introduction

The newly discovered severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has triggered the latest outbreak, namely coronavirus disease (COVID-19) (Liu and Zhang 2020). The epidemic has affected the social and economic life of the world and spread rapidly within a few months. Several countries have witnessed serious consequences of the COVID-19 pandemic, and COVID-19 has recently reiterated in a few countries as a second wave with incremental growth. The World Health Organization (WHO) reported 227,940,972 confirmed cases of COVID-19 globally, including 4,682,899 deaths, as of September 2021 (epidemiological 2021). Thus, COVID-19 detection and diagnosis have become a contemporary research task. COVID-19 is a type of pneumonia that infects the respiratory system and spreads through close contact. Isolation of infected patients is the preliminary step to break communal spread; in addition, appropriate medication can increase a patient's chance of survival. Detecting COVID-19 infection from physical symptoms such as fever, cold, dyspnea, fatigue, and myalgia is a tedious task (Rousan et al. 2020). The reverse transcription-polymerase chain reaction (RT-PCR) is the traditional clinical test used for the detection of COVID-19 infection. However, long turnaround time and limited availability of testing kits are the major difficulties with RT-PCR tests (Liu et al. 2020). Thus, researchers have been motivated to implement automatic COVID-19 detection models. A wide variety of models have been reported for COVID-19 detection, as follows. Sakib et al. (2020) have generated synthetic chest x-ray images with COVID-19 infection to train a custom CNN model using generic data augmentation and a generative adversarial network, achieving 93.94% accuracy in COVID-19 detection. Horry et al. (2020) have optimized the VGG19 model for COVID-19 detection from x-ray, ultrasound, and CT scan images.
They have attained 86%, 100%, and 84% precision on x-ray, ultrasound, and CT scans, respectively. Abbas et al. (2021) have proposed the DeTraC model, which investigates class boundaries using a class decomposition mechanism, achieving 93.1% accuracy for COVID-19 detection from x-ray images. On the other hand, Erdoğan and Narin (2021) have utilized pre-trained models for feature extraction and the Relieff algorithm for feature selection; they have employed a support vector machine for the classification of cough acoustic signals and achieved 98.4% accuracy. A transfer learning model with a support vector machine has been presented by Narin (2020) for three-class chest image classification. Similarly, feature selection methods over deep features of x-ray images have been devised by Narin (2021) for COVID-19 detection. Nowadays, medical imaging plays a vital role in healthcare services for disease detection (Panayides et al. 2020; Hariri and Narin 2021). Some of the healthcare applications are brain metastases detection (Dikici et al. 2020), ischemic stroke detection (Kodama et al. 2018), ulcer classification (Goyal et al. 2018), and patient risk prediction (Ju et al. 2020). Recent studies have shown that medical imaging technology can be an alternative to RT-PCR, as it is highly sensitive for the diagnosis and screening of COVID-19 (Ng et al. 2020; Liu et al. 2020). Apart from the clinical tests, radiography examination is a fast and cost-effective test for COVID-19 detection. Moreover, digital x-ray equipment is available in most hospitals and requires no additional transportation cost. In general, COVID-19 infection can be identified by examination of multifocal and bilateral ground-glass opacity and/or consolidation (Rousan et al. 2020; Cleverley et al. 2020). Ground-glass opacity refers to a region of hazy lung radiopacity in chest radiography.
The central mediastinum and heart appear white in a normal chest x-ray, while the lungs appear black due to air. There is a change in blackness in the lung portion due to denser ground-glass opacity in COVID-19 infection. Figure 1a depicts a normal chest x-ray finding, while Fig. 1b visualizes ground-glass opacity (white arrows) due to COVID-19 infection. Similarly, outlined arrows in Fig. 1b indicate consolidation of the left upper and mid zones of the lungs. Thus, the identification of ground-glass opacity and consolidation patterns is essential for COVID-19 detection. Figure 1 also illustrates the effect of COVID-19 on intensity changes near the upper heart region. The visual examination of these patterns is a challenging task for computer-aided COVID-19 detection systems. Apart from implementation issues, class imbalance is one of the significant drawbacks of existing COVID-19 datasets. An insufficient number of COVID-19 positive samples and progressive updates of the datasets are major concerns. Existing proposals have reported their results on balanced datasets having a limited number of COVID-19 positive samples. Thus, we have considered two publicly available chest x-ray datasets for COVID-19 detection, chosen such that one dataset exhibits data imbalance and the other is balanced. Details of the two datasets are as follows.
Fig. 1

Chest x-ray for COVID-19 detection

COVID-XRay-5K (COVID5K) was created by Minaee et al. (2020) and has recently been updated to 5184 chest x-ray images. This dataset can be used for binary classification, as it contains 5000 normal chest x-ray images and 184 COVID-19 positive images; it exhibits data imbalance with a huge number of COVID-negative chest x-ray images. The COVID radiography (COVIDRD) database of chest x-ray images has recently been published on Kaggle (Kaggle covid-19 2021). It consists of three classes: normal (1341), COVID-19 (1200), and viral pneumonia (1345). As our objective is COVID-19 detection, we have considered only the normal and COVID-19 positive chest x-ray images; the dataset thus presents balanced positive and negative samples. The remaining paper is organized as follows: Sect. 2 presents the literature review, Sect. 3 elaborates the proposed methodology, Sect. 4 discusses the quantitative analysis of the proposed model, and Sect. 5 presents the conclusions of the paper.

Literature review

COVID-19 detection from chest x-ray images has become a contemporary research task due to implementation and dataset issues. Deep learning models are popular and successful for image classification. In this section, we report a literature review of binary COVID-19 detection models. Narin et al. (2021a) have employed the pre-trained ResNet50 model for three binary classification tasks: normal/COVID-19, normal/viral pneumonia, and normal/bacterial pneumonia. Maghdid et al. (2021) have utilized a modified pre-trained AlexNet model for COVID-19 detection. Jaiswal et al. (2020) have proposed COVIDPEN, a pruned EfficientNet-based model for COVID-19 detection. Minaee et al. (2020) have presented Deep-COVID, which uses deep transfer learning for the prediction of COVID-19. Heidari et al. (2020) have performed histogram equalization and bilateral low-pass filtering as pre-processing; the classification results have then been obtained using a transfer learning-based convolutional neural network model. Hemdan et al. (2020) have proposed COVIDX-Net, a modified VGG19 model for COVID-19 detection. Afshar et al. (2020) have implemented COVID-CAPS, a framework based on a capsule network for COVID-19 detection. Table 1 summarizes the existing models for binary classification of COVID-19 cases. The implementation of an effective COVID-19 detection system is still a challenging task due to the recent spreading trend of COVID-19 (Sakib et al. 2020).
Table 1

Review of COVID-19 detection methods

References | Year | Method | Sens.% | Spec.% | Acc.%
Narin et al. (2021a) | 2021 | Pre-trained models | 98.2 | 96.1 | –
Maghdid et al. (2021) | 2021 | TL and AlexNet | 72 | 100 | 94.1
Jaiswal et al. (2020) | 2020 | TL and DenseNet201 | – | – | 99.82
Minaee et al. (2020) | 2020 | TL and ResNet18 | 98 | 90.7 | –
Heidari et al. (2020) | 2020 | TL and VGG16 | 98 | 100 | 94.5
Hemdan et al. (2020) | 2020 | COVIDX-Net | 100 | 80 | –
Afshar et al. (2020) | 2020 | COVID-CAPS | – | – | 98.3
Moreover, the implementation of time-efficient models with better performance is another objective for COVID-19 detection models. This motivated us to implement a time-efficient COVID-19 detection model that works on both datasets. The major contributions of the work are as follows. First, we implement a time-efficient generalized model for COVID-19 detection that works on two datasets. Second, the proposed residual separable convolution block improves the performance of the basic MobileNet model. Third, our proposed model is compatible with mobile vision applications, as it uses MobileNet and produces similar performance with reduced input sizes.

Proposed methodology

Recently, deep neural networks have been established as successful hands-on models for image classification due to the availability of the huge ImageNet dataset. MobileNet, ResNet, GoogleNet, and Inception ResNet are among the most popular models for classification. However, there is a scarcity of large medical image datasets like ImageNet for medical image classification. This motivated researchers to employ transfer learning to improve medical image classification performance. In this section, we present the transfer learning procedure along with the implementation details of the proposed model.

Transfer learning

Transfer learning involves sharing the weights or knowledge extracted from one problem domain to solve other related problems. High accuracy can be achieved with transfer learning when the problems are closely related. The existing pre-trained models are trained on the ImageNet dataset and can perform thousand-class classification; customization of the output layer is therefore needed to handle binary classification. Figure 2a visualizes a pre-trained model for COVID-19 detection, and the implementation steps are as follows. The pre-trained model obtained from this process then needs training with medical image data; thus, the models are trained with chest x-ray image datasets for the detection of COVID-19 infection.
Fig. 2

Block diagram of the proposed model

Firstly, a pre-trained model is assigned ImageNet weights and its output layers are removed so that the output layer can be customized. The multi-dimensional feature map obtained from the pre-trained model is then flattened to generate a one-dimensional feature vector. Finally, a softmax layer with two neurons is employed to produce the classification result from this one-dimensional feature vector.
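The head-customization steps above can be sketched numerically. The following is a minimal NumPy mock in which a random array stands in for the (7, 7, 1024) feature map that a pre-trained MobileNet emits for a (224, 224, 3) input; the backbone itself (and its ImageNet weights) is elided, and all weights here are illustrative stand-ins, not trained values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the (7, 7, 1024) feature map a pre-trained MobileNet
# backbone would emit for a (224, 224, 3) input (backbone elided).
feature_map = rng.standard_normal((7, 7, 1024))

# Step 2: flatten the multi-dimensional feature map to a 1-D vector.
flat = feature_map.reshape(-1)                   # 7 * 7 * 1024 = 50176 values

# Step 3: softmax layer with two neurons (Normal vs COVID-19).
W = rng.standard_normal((flat.size, 2)) * 0.01   # illustrative weights
b = np.zeros(2)
logits = flat @ W + b
probs = np.exp(logits - logits.max())
probs /= probs.sum()                             # valid probability vector

print(flat.shape, probs.shape)
```

The same three steps map one-to-one onto a Keras workflow (load backbone without its top, flatten, attach a two-neuron softmax), which is how the paper's Python/TensorFlow implementation would express them.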

Proposed residual separable convolution block

In our experiments, we have observed that GoogleNet performs well on both datasets. On the other hand, Inception ResNet exhibits the worst performance on the COVIDRD dataset due to the insufficient size of the dataset, while MobileNet and ResNet50 produce average performance due to their ample convolutions. MobileNet (Howard et al. 2017) is a popular time-efficient model designed especially for mobile vision applications. Our objective is to design a fast COVID-19 detection model, and hence we have considered pre-trained MobileNet. We have proposed a residual separable convolution (RSC) block, shown in Fig. 2b, to improve MobileNet performance. In this process, we have replaced the flatten layer of the pre-trained MobileNet model with an RSC block; the resulting model is referred to as the MobileNet and residual separable convolution block (MNRSC) model. The proposed RSC block uses two separable convolution layers, a global average pooling layer, a dense layer, and a dropout layer to enhance the spatial feature vectors. We have devised separable convolution with a factored residual connection to reiterate feature maps. A global pooling layer then converts the multi-dimensional spatial features into a one-dimensional feature vector; in general, global pooling acts as a flatten layer and also avoids over-fitting (Lin et al. 2014). A dense layer with 512 neurons has been employed to establish a traditional neural network. In addition to global average pooling, we have utilized a dropout layer after the dense layer to further reduce over-fitting due to class imbalance. The proposed MNRSC model has been designed using MobileNet and an RSC block, and the steps involved in its implementation are as follows. Pre-trained MobileNet: it accepts an input image of size (224×224×3) and produces a feature map of size (7×7×1024). RSC block: it transforms this feature map into a 512-dimensional vector as follows.
Two successive separable convolutions (SC2D) with 512 filters having a kernel size of (1×1) are performed, generating feature maps of dimension (7×7×512). The factored output of the first separable convolution is then added to the output of the second to produce a feature map of size (7×7×512); this step invokes the factored residual connection to avoid the degradation problem of neurons. Global average pooling is then employed to generate a feature vector of size 512, mapping the three-dimensional feature map into a one-dimensional vector. Finally, a dense layer with 512 neurons forms a fully connected neural network to produce a feature vector of size 512. Classification: a softmax layer with two neurons is utilized for binary chest x-ray classification as Normal or COVID-19.
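The shape transformations of the RSC block can be traced in a small NumPy sketch. A (1×1) separable convolution reduces here to a per-pixel channel-mixing matrix product; the ReLU on the dense layer is an assumption, and all weights are random stand-ins for trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((7, 7, 1024))      # MobileNet output feature map

def pointwise_conv(t, w):
    """A (1x1) separable convolution: a channel-mixing matmul per pixel."""
    return t @ w

w1 = rng.standard_normal((1024, 512)) * 0.01
w2 = rng.standard_normal((512, 512)) * 0.01

f1 = pointwise_conv(x, w1)                 # first SC2D  -> (7, 7, 512)
f2 = pointwise_conv(f1, w2)                # second SC2D -> (7, 7, 512)
res = f1 + f2                              # factored residual connection

gap = res.mean(axis=(0, 1))                # global average pooling -> (512,)

wd = rng.standard_normal((512, 512)) * 0.01
dense = np.maximum(gap @ wd, 0)            # dense layer, 512 neurons (ReLU assumed)

print(f1.shape, res.shape, gap.shape, dense.shape)
```

A two-neuron softmax on `dense` then completes the classification, exactly as in the head-customization procedure of Sect. 3.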

Results and discussion

In this section, we present a detailed analysis of the proposed model using various parameters, including input size and noisy datasets. Four popular fine-tuned models, MobileNet, InceptionResNet, GoogleNet, and ResNet, have been considered for the evaluation. The proposed model has also been compared with recent COVID-19 detection models, including Minaee et al. (2020) and Maghdid et al. (2021).

Experimental setup

We have evaluated the proposed model using four vital performance metrics: accuracy, sensitivity, specificity, and Jaccard similarity. Sensitivity measures the true positive rate, while specificity measures the true negative rate; accuracy and Jaccard similarity capture overall classification performance. The proposed model and the other pre-trained models have been implemented using Python and TensorFlow. All experiments have been conducted on an Intel Xeon processor with a 25 GB GPU system. We have trained the proposed and pre-trained models using the Adam optimizer with an initial learning rate of 0.0001. Table 2 lists the complete hyper-parameter setup utilized for the experiments.
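The four metrics follow directly from binary confusion-matrix counts. A small sketch, using the COVID5K error counts reported later in this section (two false negatives and six false positives out of 184 positive and 5000 negative samples) as illustrative inputs:

```python
def binary_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, specificity and Jaccard similarity
    from binary confusion-matrix counts (positive = COVID-19)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    jaccard = tp / (tp + fp + fn)         # intersection over union
    return accuracy, sensitivity, specificity, jaccard

# COVID5K confusion-matrix counts: 2 false negatives, 6 false positives.
acc, sens, spec, jac = binary_metrics(tp=182, tn=4994, fp=6, fn=2)
print(round(acc, 4), round(sens, 4), round(spec, 4), round(jac, 4))
```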
Table 2

Hyperparameter setup

Hyperparameter | Value
Batch size | 8
Optimizer | Adam
Initial learning rate | 0.0001
Number of epochs | 10

Evaluation on COVID5K dataset

The COVID-Xray-5K (COVID5K) dataset is a publicly available dataset published by Minaee et al. (2020), consisting of 5184 chest x-ray images. The proposed model has been evaluated on this dataset, and Table 3 lists the cross-validation results. The proposed MNRSC model has achieved 100% accuracy in the first fold and 99% in the other folds; similar performance has been observed for the other metrics. The table also shows that the proposed model procures a mean accuracy of 99% with a standard deviation of 0.2%. Figure 3 depicts the training and validation loss of the proposed MNRSC model in the fourth fold. The proposed MNRSC model exhibits a high validation loss in the first three epochs and attains fast convergence within ten epochs due to the use of the Adam optimizer. Figure 3 demonstrates that the validation loss becomes consistent after the sixth epoch. The proposed model achieves consistent loss due to the deployment of the residual separable convolution block.
Table 3

Fivefold cross-validation on COVID5K dataset

Fold | Accuracy | Sensitivity | Specificity | Jaccard
1 | 100.00 | 100.00 | 100.00 | 100.00
2 | 99.86 | 99.86 | 99.85 | 99.72
3 | 99.52 | 99.52 | 99.65 | 99.08
4 | 99.66 | 99.66 | 99.80 | 99.34
5 | 99.61 | 99.61 | 99.60 | 99.27
Avg. | 99.73 | 99.73 | 99.78 | 99.48
Std. | 0.20 | 0.20 | 0.16 | 0.37
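The Avg. and Std. rows of Table 3 can be reproduced from the per-fold accuracies; the reported 0.20 corresponds to the sample standard deviation (n − 1 divisor):

```python
import statistics

fold_accuracy = [100.00, 99.86, 99.52, 99.66, 99.61]  # Table 3, COVID5K

mean_acc = statistics.mean(fold_accuracy)
std_acc = statistics.stdev(fold_accuracy)   # sample std (n - 1 divisor)

print(round(mean_acc, 2), round(std_acc, 2))  # → 99.73 0.2
```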
Fig. 3

Training and validation loss on COVID5K dataset

Figure 4 visualizes the confusion matrix obtained by the proposed model on the COVID5K dataset. From this figure, it can be observed that only two COVID-19 positive samples have been wrongly classified, while six COVID-19 negative samples are classified wrongly.
Fig. 4

Confusion matrix on COVID5K dataset

We have investigated the reasons behind this behavior of the proposed model. Table 4 lists three x-ray image samples from the COVID5K dataset. We have found that our proposed model is ineffective on disoriented images, as can be observed from sample (3) of Table 4; in such samples, other parts like the abdomen have been included due to the wrong orientation. However, our model classifies the other images correctly.
Table 4

Sample x-ray images along with actual and predicted labels

COVID5K | (1) | (2) | (3)
Actual | 0 | 1 | 0
Predicted | 0 | 1 | 1

Comparison with state-of-the-art models

The proposed model has been compared with recent COVID-19 detection models. The authors of those works report sensitivity and specificity, and hence we have considered the same metrics for comparison. Table 5 lists the performance comparison with the state-of-the-art models. The proposed MNRSC model outperforms the existing models with 99% sensitivity and specificity, achieving an improvement of about 1% in sensitivity and 9% in specificity over Minaee et al. (2020).
Table 5

Comparison with state-of-the-art models

References | Sensitivity | Specificity
Minaee et al. (2020) | 98.00 | 90.70
Maghdid et al. (2021) | 98.77 | 99.02
MNRSC | 99.73 | 99.78

Comparison with pre-trained models

We have identified the four best-performing pre-trained models, GoogleNet, Inception ResNet, MobileNet, and ResNet50, whose accuracy is greater than 98%. These models have produced similar performance on both datasets after fine-tuning. The input size is the primary factor that influences computational cost and performance; low-resolution images are preferred for fast computation, especially for time-constrained applications such as mobile applications. Thus, we have conducted our experiments with various input sizes, including (224×224) and (128×128). The detailed analysis with various input sizes is as follows. Performance with input image size (224×224): Fig. 5a depicts a comparison of the four metrics with input size (224×224). The figure shows that the proposed model outperforms the existing pre-trained models in accuracy, sensitivity, and Jaccard; it reports lower specificity than GoogleNet and similar specificity to the other models.
Fig. 5

Performance comparison on COVID5K dataset

Performance with input image size (128×128): Fig. 5b compares the results with input size (128×128); the proposed model exhibits superior performance on the COVID5K dataset in all metrics except sensitivity. The proposed MNRSC model fails to attain better sensitivity due to data imbalance. An ROC curve plots the true positive rate against the false positive rate. Figure 6 visualizes a comparison of ROC curves among the proposed model and its competitive models. The existing pre-trained models report lower performance on the COVID5K dataset, as shown in Fig. 6; our model exhibits the best characteristics of its competitive models.
Fig. 6

Comparison of ROC curves

Performance analysis on noisy data

In general, noise is another image artifact that influences the performance of a model. Thus, we have evaluated the proposed model on noisy datasets. Medical images suffer from random noise, and hence we have imposed random Gaussian noise on the dataset. We have created a noise image having zero mean with a standard deviation of 5. Table 6 shows a sample image along with its random noise and the resulting noise image. If I(w, h, c) is the input image and N(w, h, c) is the random noise function, then the noise image NI(w, h, c) can be obtained using Eq. 1: NI(w, h, c) = I(w, h, c) + N(w, h, c). Table 7 compares the results of the proposed model with its competing models on the noisy COVID5K dataset; the proposed MNRSC model has acquired the second-best result with 99.71% accuracy.
Table 6

Sample noise image generation

(a) Original | (b) Random noise | (c) Noise image
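The zero-mean, σ = 5 Gaussian noise generation described above can be sketched in NumPy as follows. Drawing the noise per pixel and clipping back to the 8-bit intensity range is an assumption about the exact procedure, which the paper does not spell out.

```python
import numpy as np

rng = np.random.default_rng(42)

def add_gaussian_noise(image, sigma=5.0):
    """NI(w, h, c) = I(w, h, c) + N(w, h, c):
    add zero-mean Gaussian noise with the given standard deviation."""
    noise = rng.normal(loc=0.0, scale=sigma, size=image.shape)
    noisy = image.astype(np.float64) + noise
    return np.clip(noisy, 0, 255).astype(np.uint8)  # assumed 8-bit range

# Illustrative grayscale stand-in for a chest x-ray image.
image = rng.integers(0, 256, size=(224, 224, 1), dtype=np.uint8)
noisy = add_gaussian_noise(image, sigma=5.0)
print(image.shape, noisy.shape, noisy.dtype)
```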
Table 7

Comparison on noisy COVID5K dataset

Model | Accuracy | Sensitivity | Specificity | Jaccard
GoogleNet | 99.71 | 99.71 | 99.94 | 99.43
InceptionResNet | 99.17 | 99.17 | 99.84 | 98.38
ResNet50 | 99.86 | 99.86 | 99.88 | 99.73
MobileNet | 99.32 | 99.32 | 99.96 | 98.66
MNRSC | 99.71 | 99.71 | 99.88 | 99.43

Bold indicates best score


Empirical time complexity

Training time is another vital factor that needs to be considered while designing effective deep networks. In general, training time mainly depends on the number of parameters and training images. In pre-trained MobileNet, the softmax layer receives a vector of size 50176 (7×7×1024), which can be treated as neurons for decision making; the softmax layer maps these 50176 neurons onto two neurons for binary classification. In the proposed model, the softmax layer receives only 512 neurons, as the RSC block transforms the (7×7×1024) feature map into 512 neurons. The proposed model uses 0.3 M additional parameters over basic MobileNet. Table 8 evidences that the proposed MNRSC model outperforms its competitive models other than MobileNet; however, the proposed MNRSC model takes an additional training time of 4 sec. on COVID5K compared with MobileNet.
Table 8

Number of parameters and training time

Model | # Parameters | Training time (sec.)
GoogleNet | 21.905 M | 101.5
InceptionResNet | 54.413 M | 234.2
ResNet50 | 23.788 M | 118.2
MobileNet | 3.329 M | 89.9
MNRSC | 3.626 M | 93.5
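The neuron counts discussed above can be checked arithmetically. The sketch below counts only the decision-head parameters (weights plus biases) under the standard dense-layer formula; the 0.3 M overall parameter difference additionally includes the RSC block's separable-convolution and dense-layer weights, which are not counted here.

```python
# Feature map emitted by MobileNet for a (224, 224, 3) input.
neurons_plain = 7 * 7 * 1024          # flattened vector fed to softmax
print(neurons_plain)                  # → 50176

# Softmax head on the flattened vector (weights + biases, 2 classes).
params_plain_head = neurons_plain * 2 + 2

# In MNRSC, the RSC block compresses this to 512 neurons first.
neurons_mnrsc = 512
params_mnrsc_head = neurons_mnrsc * 2 + 2

print(params_plain_head, params_mnrsc_head)
```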

Evaluation on COVIDRD dataset

COVID radiography (COVIDRD) is another dataset, published on Kaggle (Kaggle covid-19 2021). Table 9 lists the cross-validation results of the proposed model on this dataset. The table reveals that the proposed model attains 100% specificity in the first four folds and 99.81% in the fifth fold. We have achieved a consistent performance of 99% for the other metrics, including accuracy, sensitivity, and Jaccard.
Table 9

Fivefold cross-validation on COVIDRD dataset

Fold | Accuracy | Sensitivity | Specificity | Jaccard
1 | 99.71 | 99.71 | 100.00 | 99.41
2 | 99.80 | 99.80 | 100.00 | 99.61
3 | 99.80 | 99.80 | 100.00 | 99.61
4 | 99.90 | 99.90 | 100.00 | 99.80
5 | 99.61 | 99.61 | 99.81 | 99.22
Avg. | 99.76 | 99.76 | 99.96 | 99.53
Std. | 0.11 | 0.11 | 0.08 | 0.22
Figure 7 visualizes the training and validation loss on the COVIDRD dataset. The proposed model reports consistent loss after the sixth epoch and attains optimal loss within ten epochs. The proposed model has wrongly classified seven COVID-19 positive samples as negative on the COVIDRD dataset, as shown in Fig. 8; however, our model has predicted all COVID-19 negative samples correctly. Table 10 lists three x-ray image samples from the COVIDRD dataset. Our model fails to detect disoriented images, as shown in sample (3) of Table 10; in this image, the head of the patient has been included due to the wrong orientation.
Fig. 7

Training and validation loss on COVIDRD dataset

Fig. 8

Confusion matrix on COVIDRD dataset

Table 10

Sample x-ray images along with actual and predicted labels

COVIDRD | (1) | (2) | (3)
Actual | 0 | 1 | 1
Predicted | 0 | 1 | 0
Table 11 lists the performance comparison with the Deep-COVID model. The Deep-COVID model attains sensitivity and specificity of 98.29% and 98.02%, respectively. The proposed MNRSC model outperforms the existing models with 99% sensitivity and specificity, an improvement of about 1% in both metrics. In medical image classification, even a 1% improvement is considered significant.
Table 11

Comparison with state-of-the-art models

References | Sensitivity | Specificity
Minaee et al. (2020) | 98.29 | 98.02
Maghdid et al. (2021) | 96.16 | 97.49
MNRSC | 99.76 | 99.96
Performance comparison with the existing pre-trained models is as follows. Performance with input image size (224×224): Fig. 9a depicts a comparison of the four metrics with input size (224×224). The figure shows that the proposed model outperforms the existing pre-trained models in all metrics, and highlights a significant improvement of 7% in Jaccard similarity over its backbone MobileNet.
Fig. 9

Performance comparison on COVIDRD dataset

Performance with input image size (128×128): Fig. 9b compares the results with input size (128×128); the proposed model exhibits superior performance on the COVIDRD dataset in all metrics. Figure 10 visualizes the comparison of ROC curves among the proposed model and its competitive models; the proposed model exhibits similar performance to the pre-trained models on the COVIDRD dataset.
Fig. 10

Comparison of ROC curves


Performance analysis on noisy datasets

Our model outperforms its competing models on the noisy COVIDRD dataset with 99.65% accuracy, as can be observed from Table 12.
Table 12

Comparison on noisy COVIDRD datasets

Model | Accuracy | Sensitivity | Specificity | Jaccard
GoogleNet | 99.53 | 99.53 | 99.70 | 99.06
InceptionResNet | 99.61 | 99.61 | 99.70 | 99.22
ResNet50 | 99.53 | 99.53 | 99.78 | 99.06
MobileNet | 99.25 | 99.25 | 98.96 | 98.52
MNRSC | 99.65 | 99.65 | 100.0 | 99.29

Bold indicates best score

Table 13 shows that the proposed MNRSC model outperforms its competitive models other than MobileNet; however, the proposed MNRSC model takes an additional training time of 2 sec. on the COVIDRD dataset compared with MobileNet.
Table 13

Number of parameters and training time

Model | # Parameters | Training time (sec.)
GoogleNet | 21.905 M | 51.0
InceptionResNet | 54.413 M | 116.5
ResNet50 | 23.788 M | 58.9
MobileNet | 3.329 M | 44.5
MNRSC | 3.626 M | 46.7

Discussion

We have analyzed the high-level feature maps of the proposed model. Figure 11a and b depict the convolution results after the first layer on Normal and COVID-19 image samples, respectively. These figures visualize the structural, edge, and contrast features of the images. Consider the feature map in the first row, seventh column of Fig. 11, where the structure of the skeleton and heart has degenerated; such samples have resulted in wrong classification. The proposed model exhibits better sensitivity on COVID5K and better specificity on the COVIDRD dataset; however, the difference between sensitivity and specificity is negligible, and hence our model exhibits optimal performance. Moreover, our model produces the best results on a balanced dataset like COVIDRD, while achieving competitive performance on imbalanced datasets like COVID5K. The proposed residual separable convolution block over MobileNet achieves significant performance on both datasets. Further, the following conclusions can be made from the above analysis.
Fig. 11

Convolution results at first layer

Our model produces better results on a balanced dataset and competitive results on an imbalanced dataset. It exhibits superior performance in accuracy, sensitivity, and Jaccard on the COVID5K dataset, and outperforms the existing models in all key metrics on the COVIDRD dataset. It also reports consistent results with various input sizes, and hence it is compatible with resource-constrained devices like mobiles. Finally, the proposed model attains a 7% improvement in Jaccard similarity over the MobileNet model on the COVIDRD dataset.

Conclusion

COVID-19 detection from x-ray images has become a contemporary research topic due to the increase in the number of COVID-19 cases and imbalanced datasets. However, the implementation of time-efficient models with better performance is still challenging. In this work, we have proposed a MobileNet and residual separable convolution block model for chest x-ray image classification. The proposed residual separable convolution block uses two separable convolutions, global average pooling, a dense layer, and a dropout layer. The separable convolutions with a factored residual connection have been utilized to balance computational cost and feature enhancement, and global average pooling has been devised instead of a flatten layer to capture image-level features. Two publicly available datasets have been considered for the performance evaluation of the proposed model. The proposed model outperforms existing pre-trained and state-of-the-art models with 99% accuracy in all key metrics except specificity; however, the difference between sensitivity and specificity is negligible, and hence the model exhibits optimal performance. The proposed model achieves similar results on noisy datasets. It incurs less training time than the existing pre-trained models and exhibits competitive performance with basic MobileNet. Further, the proposed model is compatible with mobile applications, as it uses fewer parameters and less training time. The proposed model fails to exhibit superior performance on a noisy dataset with imbalanced data. In our future work, we plan to design an efficient COVID-19 detection model for low-quality and noisy datasets.
References (18 in total)

1.  The role of chest radiography in confirming covid-19 pneumonia.

Authors:  Joanne Cleverley; James Piper; Melvyn M Jones
Journal:  BMJ       Date:  2020-07-16

2.  Ischemic Stroke Detection by Analyzing Heart Rate Variability in Rat Middle Cerebral Artery Occlusion Model.

Authors:  Tomonobu Kodama; Keisuke Kamata; Koichi Fujiwara; Manabu Kano; Toshitaka Yamakawa; Ichiro Yuki; Yuichi Murayama
Journal:  IEEE Trans Neural Syst Rehabil Eng       Date:  2018-06       Impact factor: 3.802

3.  DL-CRC: Deep Learning-Based Chest Radiograph Classification for COVID-19 Detection: A Novel Approach.

Authors:  Sadman Sakib; Tahrat Tazrin; Mostafa M Fouda; Zubair Md Fadlullah; Mohsen Guizani
Journal:  IEEE Access       Date:  2020-09-18       Impact factor: 3.367

4.  COVID-19 Detection Through Transfer Learning Using Multimodal Imaging Data.

Authors:  Michael J Horry; Subrata Chakraborty; Manoranjan Paul; Anwaar Ulhaq; Biswajeet Pradhan; Manas Saha; Nagesh Shukla
Journal:  IEEE Access       Date:  2020-08-14       Impact factor: 3.367

5.  Deep neural networks for COVID-19 detection and diagnosis using images and acoustic-based techniques: a recent review.

Authors:  Walid Hariri; Ali Narin
Journal:  Soft comput       Date:  2021-08-24       Impact factor: 3.732

6.  Automated Brain Metastases Detection Framework for T1-Weighted Contrast-Enhanced 3D MRI.

Authors:  Engin Dikici; John L Ryu; Mutlu Demirer; Matthew Bigelow; Richard D White; Wayne Slone; Barbaros Selnur Erdal; Luciano M Prevedello
Journal:  IEEE J Biomed Health Inform       Date:  2020-03-23       Impact factor: 5.772

7.  Imaging Profile of the COVID-19 Infection: Radiologic Findings and Literature Review.

Authors:  Ming-Yen Ng; Elaine Y P Lee; Jin Yang; Fangfang Yang; Xia Li; Hongxia Wang; Macy Mei-Sze Lui; Christine Shing-Yen Lo; Barry Leung; Pek-Lan Khong; Christopher Kim-Ming Hui; Kwok-Yung Yuen; Michael D Kuo
Journal:  Radiol Cardiothorac Imaging       Date:  2020-02-13

8.  Classification of COVID-19 in chest X-ray images using DeTraC deep convolutional neural network.

Authors:  Asmaa Abbas; Mohammed M Abdelsamea; Mohamed Medhat Gaber
Journal:  Appl Intell (Dordr)       Date:  2020-09-05       Impact factor: 5.019

9.  Clinical and CT imaging features of the COVID-19 pneumonia: Focus on pregnant women and children.

Authors:  Huanhuan Liu; Fang Liu; Jinning Li; Tingting Zhang; Dengbin Wang; Weishun Lan
Journal:  J Infect       Date:  2020-03-21       Impact factor: 6.072

