Literature DB >> 35281724

A COMPARATIVE STUDY OF X-RAY AND CT IMAGES IN COVID-19 DETECTION USING IMAGE PROCESSING AND DEEP LEARNING TECHNIQUES.

H Mary Shyni1, E Chitra1.   

Abstract

The deadly coronavirus has not just devastated the lives of millions but has put the entire healthcare system under tremendous pressure. Early diagnosis of COVID-19 plays a significant role in isolating the positive cases and preventing the further spread of the disease. The medical images along with deep learning models provided faster and more accurate results in the detection of COVID-19. This article extensively reviews the recent deep learning techniques for COVID-19 diagnosis. The research articles discussed reveal that Convolutional Neural Network (CNN) is the most popular deep learning algorithm in detecting COVID-19 from medical images. An overview of the necessity of pre-processing the medical images, transfer learning and data augmentation techniques to deal with data scarcity problems, use of pre-trained models to save time and the role of medical images in the automatic detection of COVID-19 are summarized. This article also provides a sensible outlook for the young researchers to develop highly effective CNN models coupled with medical images in the early detection of the disease.
© 2022 The Authors.


Keywords:  CNN (Convolutional Neural Network); COVID-19 detection; CT (Computed Tomography) images; Data Augmentation; Deep Learning; Image processing; X-ray images

Year:  2022        PMID: 35281724      PMCID: PMC8898857          DOI: 10.1016/j.cmpbup.2022.100054

Source DB:  PubMed          Journal:  Comput Methods Programs Biomed Update        ISSN: 2666-9900


Introduction

The seventh human coronavirus, Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), emerged in Wuhan, Hubei province, China in early December 2019, and the outbreak spread rapidly across the world [1]. Because of its effect on health, psychological well-being and the economy, Coronavirus Disease 2019 (COVID-19) has been declared a worldwide public health catastrophe. Sore throat, dry cough, tiredness, fever, difficulty in breathing, loss of smell and loss of taste are the major symptoms of COVID-19 [2]. The incubation period varies between 2 and 14 days from person to person, and the disease is likely to be transmitted through contact and respiratory droplets [3]. The rapid spread of the deadly virus has placed the entire healthcare system under immense pressure. RT-PCR (Reverse Transcription – Polymerase Chain Reaction) is the initial laboratory testing procedure for COVID-19 diagnosis. The coronavirus contains only RNA (ribonucleic acid), which must be converted to DNA (deoxyribonucleic acid) and amplified, which RT-PCR does for virus detection. Despite its advantages, RT-PCR is time-consuming, which may allow further spread of the disease from the infected person, and the deep nasal swabs are troublesome. Early diagnosis of the disease plays a pivotal role in isolating positive cases in advance and preventing community spread [4]. Since the lung region is the area primarily infected by the virus, medical imaging modalities like X-ray and Computed Tomography (CT) are generally considered in examining the severity of the infection [5, 6]. X-ray imaging techniques are often employed in the diagnosis of COVID-19 due to their wide availability, quick processing time and low cost, but CT imaging is preferred when detailed information about the infected region is required [7].
However, even for experienced radiologists, predicting the infection from medical images has become a challenging task because of the lack of advanced knowledge about the disease. Medical images combined with deep learning algorithms have become a valuable choice, yielding faster and more accurate results in the diagnosis of COVID-19 [8, 9]. Pattern recognition is the primary function of the Convolutional Neural Network (CNN), a deep neural network, which is hence utilized to detect COVID-19 from medical images. The objective of this article is to compile the workflow of recent research works related to the automated detection of COVID-19 from medical images using deep learning techniques. 85 research articles have been chosen from high-impact-factor journals to frame this paper. The motivation of this article is to provide a comparison of the remarkable characteristics of recent deep learning methods using X-ray and CT imaging modalities. The comparison of models with and without data augmentation shows that the models performed better when the datasets were augmented. The article also provides a sensible outlook for young researchers to develop highly effective CNN models coupled with medical images for the early detection of the disease. The rest of this paper is structured as follows: Section 2 describes the pre-processing techniques to generate high-quality medical images. Section 3 presents the dataset split and the strategies to overcome the data scarcity problem. Section 4 discusses the often-used pre-trained models for COVID-19 detection. Section 5 illustrates the architecture and the functions of each layer in the Convolutional Neural Network. Section 6 presents the role of medical images in the early detection of COVID-19. Section 7 discusses the binary and multiclass classification of medical images. Finally, Section 8 concludes the article, and future research suggestions are provided for young researchers.

Pre-processing of raw medical images

Medical imaging is the process of creating images of the interior parts of the human body for diagnosing various abnormalities. X-ray and CT imaging are the two most often utilized medical imaging modalities for detecting COVID-19. Due to low intensity and contrast, the borders and edges in the images may not be clear, which can lead to a false diagnosis of the disease. So there is a strong need to pre-process the medical images to extract the essential information and remove irrelevant data, thereby increasing the accuracy of the model [10]. Medical image processing deals with the application of algorithms on a digitized image to enhance the quality of the raw medical data for further analysis. Various pre-processing techniques are used in medical imaging applications to improve the visual information of the input image. Image resizing, image segmentation and image enhancement are the usually performed pre-processing techniques on X-rays and CT scans in COVID-19 diagnosis.

Image Resizing

It is necessary to standardize the dataset, as it is acquired from multiple centers and scanners, so the images may vary in size. All the images in the dataset are brought to a fixed dimension using the image resizing technique for better classification performance of the CNN model [11].
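As a rough illustration of this step, a nearest-neighbour resize can be written in a few lines of NumPy (production pipelines would normally use the interpolating resize routines of OpenCV or PIL; the 224 × 224 target size is an illustrative choice):

```python
import numpy as np

def resize_nearest(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Resize a 2-D grayscale image to (out_h, out_w) by nearest-neighbour sampling."""
    in_h, in_w = img.shape
    # Map each output pixel back to its nearest source pixel.
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return img[rows[:, None], cols]

# Standardize scans of different sizes to a fixed 224x224 input.
scan_a = np.random.rand(512, 512)
scan_b = np.random.rand(299, 400)
batch = [resize_nearest(s, 224, 224) for s in (scan_a, scan_b)]
print([b.shape for b in batch])  # -> [(224, 224), (224, 224)]
```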

Image Segmentation

Image Segmentation is an essential image processing technique to increase the prediction quality and reliability of the model. Segmentation focuses on the Region of Interest (ROI), and for COVID-19 detection the ROI is the lung region. It reduces the computational complexity by separating the lung region from other background information in the medical images [12]. Fig. 1 shows samples of image segmentation.
Fig. 1

Samples for Image Segmentation

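The reviewed systems perform this step with trained segmentation networks (e.g. FC-DenseNet or C-GAN, discussed later); the toy NumPy sketch below, built on a synthetic "scan", only illustrates the underlying idea of isolating a dark region of interest and suppressing the background:

```python
import numpy as np

def threshold_mask(img: np.ndarray) -> np.ndarray:
    """Crude ROI mask: keep pixels darker than the image mean (air-filled
    lungs appear dark on CT), zeroing out the brighter surroundings."""
    return (img < img.mean()).astype(img.dtype)

# Synthetic scan: bright body (1.0) with two dark "lung" patches (0.2).
scan = np.ones((128, 128))
scan[30:90, 20:55] = 0.2   # left lung region
scan[30:90, 70:105] = 0.2  # right lung region
mask = threshold_mask(scan)
segmented = scan * mask    # background suppressed, lung region kept
print(int(mask.sum()))     # number of pixels kept as region of interest
```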

Image Enhancement

Image Enhancement is essential for improving the visual perception quality of medical images for disease diagnosis. Histogram equalization is an enhancement technique that distributes the intensity levels over the pixels of the image. In some cases, the information carried by the white pixels is washed out due to the high contrast in the white region [13]. Adaptive Histogram Equalization (AHE) distributes the intensity values over small regions of the image, but it may over-amplify noise within homogeneous regions [14]. CLAHE (Contrast Limited Adaptive Histogram Equalization) limits the over-enhancement of noise caused by AHE: it enhances the image while fixing a maximum contrast limit beyond which the contrast cannot be increased, which prevents over-amplification of noise [15]. Alaa S. Al-Waisy et al. have used CLAHE to enhance the image contrast and improve the visibility of the borders of chest X-ray images [16]. Samples of image enhancement are shown in Fig. 2.
Fig. 2

Samples for Image Enhancement

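In practice CLAHE is applied through library routines such as OpenCV's createCLAHE; the simplified NumPy sketch below shows the core clip-and-redistribute idea applied globally (true CLAHE applies it per tile and interpolates between tiles, and the clip limit here is an illustrative value):

```python
import numpy as np

def clipped_hist_equalize(img: np.ndarray, clip_limit: int = 100) -> np.ndarray:
    """Contrast-limited histogram equalization of an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=256)
    excess = np.maximum(hist - clip_limit, 0).sum()
    hist = np.minimum(hist, clip_limit) + excess // 256  # redistribute excess
    cdf = np.cumsum(hist)
    # Map intensities so the clipped CDF becomes approximately uniform.
    lut = np.round(255 * (cdf - cdf.min()) /
                   max(cdf.max() - cdf.min(), 1)).astype(np.uint8)
    return lut[img]

# Low-contrast X-ray-like image squeezed into the range [100, 140).
rng = np.random.default_rng(0)
img = rng.integers(100, 140, size=(64, 64), dtype=np.uint8)
out = clipped_hist_equalize(img)
print(img.min(), img.max(), "->", out.min(), out.max())  # dynamic range expands
```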

Data Sets

Deep learning requires a large amount of data for the model to be trained efficiently and accurately. The data available in the dataset are split into three sets: i) Training dataset ii) Validation dataset iii) Test dataset. The training dataset is used during the learning process to train the model to perform tasks. The validation dataset is used to evaluate the model and fine-tune its hyperparameters during the training process, and it facilitates model selection. The test dataset is used to assess the model once it is completely trained using the training and the validation datasets [17]. Fig. 3 depicts the typical split of the available dataset. As COVID-19 is an ongoing and new pandemic, the available datasets are insufficient and imbalanced for training the model effectively [18]. To deal with the data scarcity problem, two strategies are often used:
Fig. 3

Split of the available dataset

i) Transfer Learning ii) Data Augmentation
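The three-way split described at the start of this section can be sketched with the standard library alone (the 70/15/15 ratio and the fixed seed are illustrative assumptions; libraries such as scikit-learn provide equivalent utilities):

```python
import random

def split_dataset(samples, train=0.7, val=0.15, seed=42):
    """Shuffle and split a list of samples into train/validation/test sets."""
    items = list(samples)
    random.Random(seed).shuffle(items)   # fixed seed for reproducibility
    n_train = int(len(items) * train)
    n_val = int(len(items) * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train_set, val_set, test_set = split_dataset(range(1000))
print(len(train_set), len(val_set), len(test_set))  # -> 700 150 150
```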

Transfer Learning

Transfer Learning is an approach where a neural network model trained for a particular task is reused for a model on another task. For the new task, only the layers that are very close to the output units are retrained. The pre-trained model has to be trained with a sufficient amount of data, because the knowledge it gains about feature extraction is what is transferred to the other model [19]. The main application of transfer learning is the classification of medical images for emerging diseases, where the availability of samples is limited [20]. Transfer learning decreases the training time of the model and is computationally less expensive, as only a few layers are retrained; since the model is already trained, it does not require a vast amount of data. Fig. 4 illustrates the concept of transfer learning: the knowledge gained by model 1, which is trained with a large amount of data, is transferred to model 2 to perform a related task when the amount of data available to train model 2 is limited.
Fig. 4

Transfer Learning Concept

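The concept of Fig. 4 can be illustrated with a small NumPy sketch in which a fixed random projection stands in for the frozen pre-trained layers and only a new classification head is retrained on a small synthetic dataset (everything here is synthetic; a real system would freeze the convolutional base of a CNN such as DenseNet121 and retrain its final layers):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" feature extractor: in practice a CNN trained on a large
# source dataset; here a fixed random projection stands in for the frozen
# layers, which are NOT updated during retraining.
W_frozen = rng.normal(size=(64, 16)) / 8.0

def extract_features(x):
    return np.maximum(x @ W_frozen, 0.0)   # frozen layers, ReLU output

# Small synthetic target-task dataset (the data-scarcity setting).
X = rng.normal(size=(40, 64))
y = (X[:, 0] > 0).astype(float)

# Only the new head -- the layers "close to the output" -- is trained.
feats = extract_features(X)
w_head = np.zeros(16)
for _ in range(2000):                      # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(feats @ w_head)))
    w_head -= 0.5 * feats.T @ (p - y) / len(y)

pred = (1.0 / (1.0 + np.exp(-(feats @ w_head)))) > 0.5
acc = float((pred == (y > 0.5)).mean())
print(f"training accuracy of the retrained head: {acc:.2f}")
```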

Data Augmentation

Data augmentation is another strategy that overcomes the data scarcity problem. It increases the number of samples in the dataset by making slight variations to the already existing samples. For instance, Soumya Ranjan Nayak et al augmented the training X-ray images by rotating them by an angle of 5˚ clockwise, scaling them by a measure of 15%, flipping the images horizontally and adding Gaussian noise with a mean of 0 and a variance of 0.25 [21]. In their 18-way data augmentation, Shui-Hua Wang et al [22] added speckle noise with a mean of 0 and a variance of 0.05 to the training Chest Computed Tomography (CCT) images. Data augmentation helps to reduce overfitting and serves as a regularizer. Augmenting the dataset can also increase the accuracy of the model. Fig. 5 depicts the accuracy of the models before and after data augmentation.
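A few of these operations can be sketched in NumPy; the Gaussian-noise variance of 0.25 follows the setting reported in [21], while arbitrary-angle rotation is omitted because it is normally delegated to library routines (e.g. scipy.ndimage or torchvision):

```python
import numpy as np

rng = np.random.default_rng(1)

def augment(img: np.ndarray):
    """Generate simple variants of one training image: a horizontal flip,
    Gaussian noise (mean 0, variance 0.25), and a 15% centre-crop "zoom"."""
    flipped = img[:, ::-1]
    noisy = img + rng.normal(0.0, np.sqrt(0.25), size=img.shape)
    h, w = img.shape
    dh, dw = int(h * 0.15) // 2, int(w * 0.15) // 2
    zoomed = img[dh:h - dh, dw:w - dw]   # crop; would be resized back to h x w
    return [flipped, noisy, zoomed]

img = rng.random((64, 64))
variants = augment(img)
print(len(variants), [v.shape for v in variants])
```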
Fig. 5

Accuracy of the models with and without data augmentation

In [23] the CNN used for the detection of COVID-19 was trained by the artificial data generated by COVIDGAN (COVID Generative Adversarial Network) which increased the accuracy of the model from 85% to 95%. Muhammad EH Chowdhury et al proposed a technique along with transfer learning and image augmentation for detecting COVID-19 pneumonia from X-ray images. The pre-trained model DenseNet201 outperformed with image augmentation in three-class classification by increasing the accuracy from 95.19% to 97.94% [24]. In [25] a deep CNN architecture was proposed for the classification of environmental sound. The accuracy has been increased from 74% to 79% after training the model with augmented data. Joseph Lemley et al introduced smart augmentation to minimize overfitting and maximize the accuracy of the network. Smart augmentation generates new samples by mixing various samples that have the same label. The accuracy was improved from 88.15% to 89.08% using traditional augmentation and further increased to 95.66% using smart augmentation [26]. In [27] to perform teeth segmentation and classification, a dataset was created by applying image augmentation technique to the dental radiographs which increased the accuracy of the AlexNet model from 88.31% to 98.88%. Table 1 shows the increase in accuracy of the models after data augmentation.
Table 1

Accuracy of models with and without data augmentation

Sl.No | Article/Year | Type | Dataset (without augmentation) | Accuracy | Dataset (with augmentation) | Accuracy
1 | Abdul Waheed et al / 2020 | COVID-19 | 721 Normal CXR, 403 COVID-19 CXR | 85% | 1399 Normal CXR, 1669 COVID-19 CXR | 95%
2 | Muhammad EH Chowdhury / 2020 | COVID-19 Pneumonia & Viral Pneumonia | 1579 Normal CXR, 423 COVID-19 CXR, 1485 viral pneumonia CXR | 95.19% | 2274 Normal CXR, 2128 COVID-19 CXR, 2138 viral pneumonia CXR | 97.94%
3 | Justin Salamon & Juan Pablo Bello / 2016 | Environmental sound classification | 8732 sound clips | 74% | 43660 sound clips | 79%
4 | Joseph Lemley et al / 2017 | Gender classification | 4000 front faces of human subjects | 88.15% | 48360 front faces of human subjects | 95.66%
5 | Shahid Khan et al / 2020 | Tooth classification | 2910 dental CXR images of individual teeth | 88.31% | 80000 dental CXR images of individual teeth | 98.88%
Augmentation is applied only to the training dataset and not to the testing dataset [28]. The training images are either position augmented or color augmented. Fig. 6 shows the sample results of data augmentation.
Fig. 6

Sample results of data augmentation


Position Augmentation

It is the type of augmentation where the pixel position of the image changes. The most used position augmentation techniques are scaling, flipping, rotation and cropping.

Colour Augmentation

It is the type of augmentation where the color of an image is modified by changing the pixel values of the image. Colour augmentation can be performed by varying the brightness, contrast, saturation or hue of an image.
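A minimal NumPy sketch of brightness and contrast jittering (the jitter ranges are illustrative assumptions, not values taken from the reviewed studies):

```python
import numpy as np

def color_jitter(img, brightness=0.2, contrast=0.2, rng=None):
    """Randomly shift brightness (additive) and contrast (multiplicative
    about the mean) of an image with values in [0, 1]."""
    rng = rng or np.random.default_rng()
    b = rng.uniform(-brightness, brightness)
    c = rng.uniform(1 - contrast, 1 + contrast)
    out = (img - img.mean()) * c + img.mean() + b
    return np.clip(out, 0.0, 1.0)   # keep values in the valid range

rng = np.random.default_rng(7)
img = rng.random((32, 32))
jittered = color_jitter(img, rng=rng)
print(jittered.shape, float(jittered.min()), float(jittered.max()))
```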

Noise Injection

Adding noise to an image also helps to enlarge the dataset. Training the neural network with Gaussian noise performs model regularization, which reduces overfitting [29].

Pretrained CNN Models

Pretrained models are models that are trained on huge datasets for a specific task. Due to the rapid spread of the disease and the limited availability of COVID-19 samples, training CNN models from scratch is a difficult task. Hence pre-trained models, which save time and produce higher accuracy, are used in most COVID-19 detection architectures. The pre-trained models often used in COVID-19 diagnosis architectures are briefly discussed:

VGG

Karen Simonyan and Andrew Zisserman from the University of Oxford proposed the Visual Geometry Group (VGG) network in the year 2015. The depth of the architecture was increased by adding convolutional layers, and smaller kernels of size 3 × 3 were used to improve the performance. VGG 16 and VGG 19 are the two versions of the VGG architecture trained on the ImageNet dataset [30].

ResNet 50

Residual Neural Network (ResNet 50) is a 50-layer network developed by Kaiming He et al and trained using the ImageNet dataset. Although stacking more convolutional and pooling layers initially produces better results, adding further layers beyond a certain depth degrades the performance of the model. To overcome this problem, shortcut (skip) connections were added, which let the input of a block bypass one or more layers, without adding extra parameters or computational complexity to the model [31].

DenseNet

This architecture was proposed by Gao Huang et al to ensure maximum flow of information between each layer in the network. It is an improved version of ResNet where each layer in the network is connected in a feed-forward fashion with every other layer. DenseNet concatenates the output feature maps of a layer with the incoming feature maps [32].

Inception V3

In the year 2014, Christian Szegedy designed Inception Net, also known as GoogLeNet, to deepen and widen the network. Inception modules were introduced, which allow the use of multiple kernel sizes at the same level. It is a 22-layer network designed to work within limited computational resources [33]. Inception V2 and Inception V3 are improved versions of GoogLeNet that add kernel factorization and batch normalization at relatively low computational cost with no compromise in quality [34].

Xception

In this architecture proposed by Francois Chollet, the inception modules in the InceptionNet architecture were replaced by depth-wise separable convolutions. The depth-wise separable convolution layers are linearly stacked with residual connections for easy modification of the network. Xception Net has almost the same number of parameters as Inception V3 [35].

MobileNet

This model that also uses depth-wise separable convolutions was developed by Andrew G Howard et al. Two hyperparameters were introduced which allows considering low latency and small-sized models for mobile applications [36]. MobileNet V2 is an improved version of MobileNet where inverted residual and linear bottlenecks were introduced to decrease the number of parameters and memory consumption [37].

Deep Neural Network

Deep learning is a branch of machine learning in which models are trained on large amounts of data to predict the output for a given input. Neural networks are sets of algorithms designed to recognize patterns. The Convolutional Neural Network (CNN) is a type of Deep Neural Network (DNN) that is frequently employed in a variety of applications.

Convolutional Neural Network

CNNs are primarily used for visual document analysis and to solve different pattern recognition tasks [38, 39]. The use of CNNs has rapidly increased in the medical field and has produced successful results in medical image classification [40]. Different CNN models have been proposed by researchers to detect abnormalities from medical images, such as detection of lung nodules [41], prediction of heart disease [42], classification of dental diseases [43], detection of skin diseases [44], prediction of breast cancer [45] and many other diseases. Recently it has been found that CNNs play a crucial role in detecting COVID-19 from medical imaging modalities such as X-ray and Computed Tomography (CT). A CNN consists of multiple layers that are responsible for extracting distinguishing features from the given image, which are transferred to the classification stage.

CNN Architecture

CNN architecture is stacked with three primary layers, namely i) Convolutional layer ii) Pooling layer and iii) Fully Connected layer. The basic CNN architecture along with the layers is shown in Fig. 7. The convolutional and pooling layers are responsible for feature extraction, while the fully connected layer is used for classification. Apart from these layers, two more components, the dropout layer and the activation function, are also defined.
Fig. 7

The architecture of Convolutional Neural Network


Convolutional layer

This is the basic layer responsible for extracting various features from the input image. The convolutional layer has a set of kernels, which are nothing but filters. The kernel is a matrix, usually much smaller than the input data, that slides over the input from left to right and top to bottom and performs a dot product with the overlapping region of the input. The result is a feature map. The feature maps of the later layers are built by combining the feature maps of the earlier layers [46].
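The sliding-window dot product described above can be written directly in NumPy; the 3 × 3 vertical-edge kernel is an illustrative hand-picked filter, whereas a CNN learns its kernels during training:

```python
import numpy as np

def conv2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide `kernel` over `img` (stride 1, no padding) and take the dot
    product at each position, producing a feature map."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A 3x3 vertical-edge kernel applied to an image with a bright right half.
img = np.zeros((6, 6)); img[:, 3:] = 1.0
edge_kernel = np.array([[-1., 0., 1.]] * 3)
fmap = conv2d(img, edge_kernel)
print(fmap.shape)  # (4, 4): (6 - 3 + 1, 6 - 3 + 1)
print(fmap)        # strongest response along the vertical edge
```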

Pooling layer

The pooling layer is the second layer, which follows the convolutional layer. This layer performs downsampling and thereby reduces the number of parameters and computations. Max pooling is the most commonly used pooling operation; it keeps the maximum element of each region of the feature map.
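A NumPy sketch of non-overlapping 2 × 2 max pooling, which halves each spatial dimension of the feature map:

```python
import numpy as np

def max_pool2d(fmap: np.ndarray, size: int = 2) -> np.ndarray:
    """Non-overlapping max pooling: keep the maximum of each size x size
    block, downsampling the feature map."""
    h, w = fmap.shape
    h, w = h - h % size, w - w % size   # drop any ragged edge
    blocks = fmap[:h, :w].reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))

fmap = np.array([[1., 3., 2., 0.],
                 [4., 2., 1., 1.],
                 [0., 1., 5., 2.],
                 [1., 0., 2., 3.]])
pooled = max_pool2d(fmap)
print(pooled)  # [[4. 2.] [1. 5.]]
```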

Fully Connected layer

The last layer in the architecture is the fully connected layer (FC layer). Its dimension is equal to the number of output classes. The input from the preceding stages is fed into the fully connected layer, which then classifies the images.

Dropout layer

When all the features are connected to the FC layer, the model may overfit the training data. A dropout layer, placed before the output layer, randomly discards a fraction of the neurons during training, which reduces overfitting [47].
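A sketch of inverted dropout, the variant used by most frameworks: surviving activations are rescaled during training so that no adjustment is needed at test time:

```python
import numpy as np

def dropout(x: np.ndarray, rate: float = 0.5, training: bool = True, rng=None):
    """Inverted dropout: during training, zero each activation with
    probability `rate` and rescale survivors by 1/(1-rate) so the expected
    activation is unchanged. At test time it is the identity."""
    if not training:
        return x
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

acts = np.ones(10000)
dropped = dropout(acts, rate=0.5)
kept = np.count_nonzero(dropped)
print(kept / acts.size)   # roughly 0.5 of the neurons survive
print(dropped.max())      # survivors scaled from 1.0 to 2.0
```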

Activation Function

Activation functions take the output of the previous layer and convert it into a form suitable as input for the next layer. They can be used at any part of the network. Activation functions add nonlinearity to the network; without them, the model would not be able to learn complex patterns from the data. Rectified Linear Unit (ReLU), softmax, sigmoid and tanh are some of the commonly used activation functions, and ReLU is the most widely used in deep learning models [48, 49].
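The commonly used activation functions mentioned above can be written directly in NumPy (the max-subtraction in softmax is a standard numerical-stability step):

```python
import numpy as np

def relu(x):     # max(0, x): cheap and widely used in hidden layers
    return np.maximum(0.0, x)

def sigmoid(x):  # squashes to (0, 1): used for binary outputs
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):  # normalizes a score vector into class probabilities
    e = np.exp(x - np.max(x))   # subtract max for numerical stability
    return e / e.sum()

scores = np.array([-2.0, 0.0, 3.0])
print(relu(scores))                  # [0. 0. 3.]
print(np.round(sigmoid(scores), 3))
print(np.round(softmax(scores), 3)) # probabilities summing to 1
```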

Role of X-ray and CT in the detection of COVID-19

X-ray and CT are two common medical imaging modalities that are used to diagnose and analyze the severity of an infection. Each medical imaging modality has its advantages and limitations. X-ray imaging is the most often utilized medical imaging tool for the diagnosis of COVID-19 due to its extensive availability [50, 51]. It can be processed by simple procedures, which reduces the imaging time and thus minimizes the possibility of spreading the virus [52]. It is economical when compared with other medical imaging modalities, non-invasive, and delivers a lower radiation dose than a CT scan. Despite these advantages, X-rays are less sensitive, which may lead to a false prediction of the disease in patients with early and mild symptoms [53]. On the other hand, CT scans are highly sensitive and contain detailed information about the affected region, and thus provide accurate results. CT imaging plays a major role in the diagnosis of lung abnormalities [54] and is more reliable, which helps in the early diagnosis of COVID-19 [55]. However, the use of CT screening is limited by its high cost, higher radiation dose and resource constraints.

Detection of COVID-19 from X-ray images

X-ray images have been used by many researchers to train the CNN model for the detection of COVID-19 due to the wide availability of datasets in comparison with other medical imaging techniques. Fig. 8 presents the sample X-ray images of a) Normal cases b) COVID-19 positive cases from the COVIDx dataset. Deep learning techniques applied in recently proposed systems used for the detection of COVID-19 from X-rays are briefly discussed.
Fig. 8

Sample X-ray images of a) Normal cases b) COVID-19 positive cases

Matias Cam Arellano and Oscar E Ramos [56] used the DenseNet121 pre-trained CNN model whose last layer alone is retrained for detecting COVID-19 from chest radiographs using open databases. As the model is already trained for detecting different lung diseases, the network provided distinctive features with an accuracy of 94.7%. In [57] COVID-19 and pneumonia are detected from chest X-ray images using a three-step process. The Conditional Generative Adversarial Network (C-GAN) is used at the first step to segment the lung region from the CXR images. In the second step, the feature extraction network which is a combination of traditional feature extraction algorithms and deep CNNs extracts the features from the segmented lung images. Various machine learning classifiers were used at the final step to classify the CXR images based on the extracted features. Binary Robust Invariant Scale Key-points (BRISK) in combination with VGG-19 produced the highest classification accuracy of 96.6%. Khandaker Foysal Haque et al [58] developed a CNN model that obtained an accuracy of 97.56% in detecting COVID-19 from CXR images. The proposed model that is trained with four convolutional layers performed better when compared with the one trained with three and five convolutional layers. In [59] a combined CNN-LSTM (Convolutional Neural Network – Long Short Term Memory) was introduced to automatically diagnose COVID-19 from chest X-ray images. The model produced an accuracy of 99.4% which used CNN for extracting the deep features and LSTM for classifying the abnormality. In [60] COVID-CheXNet was proposed to detect COVID-19 from chest X-ray images. The model yielded a very good accuracy of 99.99% by fusing the results generated by two pre-trained CNN models ResNet34 and HRNet.
Linda Wang et al [61] introduced a DCNN named COVID-Net which is one of the first open-source architectures for detecting COVID-19 from CXR images. The authors combined five different open-access datasets and created the open-access dataset COVIDx which is tested on COVID-Net, VGG-19 and ResNet 50. COVID-Net performed well with an accuracy of 93.3%. In [62] a DCNN was proposed that used multiple patches of the lungs with random cropping to avoid overfitting. FC-DenseNet-103 performed the segmentation of the lung region and ResNet-18 performed the classification of the disease which produced 91.9% accuracy. Julian et al [63] evaluated the performance of a model using three different pre-processing schemes and obtained a classification accuracy of 91.5%. In [64] Khalid El Asnaoui et al compared seven deep learning models for detecting and classifying COVID-19 pneumonia. The input X-ray and CT images are pre-processed to improve the quality and Inception-ResNet V2 obtained the highest accuracy of 92.18% with data augmentation and transfer learning. In [65] a classification method based on Advanced Squirrel Search Optimization Algorithm (ASSOA) was proposed that used two stages to classify different abnormalities from X-ray images. Pretrained ResNet 50 model is used at the first stage for feature extraction and at the second stage, the proposed ASSOA algorithm is applied for feature selection. Multilayer Perceptron Neural Network (MLP) is used for the classification of infected cases. The accuracy attained using the Kaggle dataset is 99.26% and using the chest X-ray COVID-19 GitHub images is 99.7%. Md Manjural Ahsan et al [66] analyzed the performance of six modified pre-trained models. The obtained accuracy was up to 100% for VGG-16 and MobileNet V2 in identifying the COVID-19 patients. 
Pradeep Kumar Chaudhary and Ram Bilas Pachori [67] introduced the Fourier-Bessel series expansion-based dyadic decomposition (FBD) where an X-ray image is decomposed into sub-band images. Each sub-band image is fed to the pre-trained ResNet50 model where the deep features are extracted. The extracted features are ensembled and fed to the softmax classifier which classified pneumonia caused by COVID-19 versus other pneumonia with an accuracy of 98.66%. Afshar Shamsi et al [68] proposed a transfer learning based uncertainty aware system for identifying COVID-19 infected cases from X-ray and CT images. Four pre-trained models are used for feature extraction and these features are passed through deep learning models to perform the classification task. The ResNet 50 model along with the SVM (Support Vector Machine) classifier achieved the best results with an accuracy of 87.9%. In [69] a multi-input deep convolutional attention network (MIDCAN) was proposed that was able to handle 3D chest Computed Tomography and 2D chest X-ray images simultaneously. A new convolutional block attention module (CBAM) is included in the model to improve the accuracy of the model to 98.02±1.35%. Chaimae Ouchicha et al [70] developed an advanced tool for the detection of COVID-19 from chest X-ray images named CVDNet. It consists of two parallel columns that have the same structures but different kernel sizes to detect the local and the global features. The output from the two columns is concatenated and the model produced an accuracy of 96.69%. Table 2 shows the accuracy of the discussed models. Comparison and discussion of the advantages, limitations and computational complexity of the studied methods using X-ray images are summarized in Table 3.
Table 2

Accuracy of the models using X-ray images

Sl.No | Article/Year | Model | Dataset | Accuracy
1 | Matias Cam Arellano & Oscar E Ramos / 2020 | DenseNet 121 | COVID-19 Radiography database & Chest X-ray 14 | 94.7%
2 | Abhijit Bhattacharya et al / 2021 | VGG-19 and BRISK | COVID Chest X-ray dataset & Pneumonia dataset | 96.6%
3 | Khandaker Foysal Haque et al / 2020 | Convolutional Neural Network | GitHub repository & Kaggle repository | 97.56%
4 | Md. Zabirul Islam et al / 2020 | CNN-LSTM Network | GitHub, Radiopaedia, TCIM, SIRM, Mendeley & Kaggle repository | 99.4%
5 | Alaa S. Al-Waisy et al / 2020 | COVID-CheXNet | COVID 19-vs-normal dataset | 99.99%
6 | Linda Wang et al / 2020 | COVID-Net | COVIDx | 93.3%
7 | Yujin Oh et al / 2020 | FC-DenseNet 103, ResNet 18 | JSRT / SCR, NLM | 91.9%
8 | Julian D Arias-Londono et al / 2020 | Deep CNN based on COVID-Net | HM Hospitales, BIMCV, ACT, China set, Montgomery, Chest X-ray 8, CheXpert, MIMIC | 91.5%
9 | Khalid El Asnaoui & Youness Chawki / 2020 | Inception-ResNet V2 | Chest X-ray and CT dataset & COVID Chest X-ray dataset | 92.18%
10 | El-Sayed M El-Kenawy / 2021 | ResNet 50 + ASSOA algorithm | Kaggle dataset; GitHub – CXR COVID-19 images | 99.26%; 99.7%
11 | Md Manjurul Ahsan et al / 2021 | VGG 16, MobileNet V2 | Open-source repository & Kaggle COVID-19 chest X-ray dataset | Up to 100%
12 | Pradeep Kumar Chaudhary and Ram Bilas Pachori / 2020 | ResNet50 | COVID chest X-ray dataset & Chest X-Ray Images | 98.66%
13 | Afshar Shamsi et al / 2021 | ResNet 50 & SVM classifier | Chest X-ray & Breast CT dataset | 87.9%
14 | Zhang et al / 2021 | MIDCAN | CXR & CCT images of normal and COVID affected patients collected from local hospitals | 98.02±1.35%
15 | Ouchicha et al / 2020 | CVDNet | Kaggle COVID-19 Radiography Database | 96.69%
Table 3

Comparison and discussion on the studied methods using X-ray images

Sl.No | Article/Year | Advantages | Limitations | Computational Complexity
1Matias Cam Arellano& Oscar E Ramos / 2020

The model provided distinctive features as it is already trained for the detection of various lung diseases.

The class imbalance problem is dealt with using the weighted loss function.

The model was trained with less amount of dataset.

The class imbalance of the dataset needs further attention.

Computationally less expensive as only two layers were added on top of the pre-trained DenseNet 121.

2Abhijit Bhattacharya et al / 2021

Only focussed on the lung region in the X-ray images to provide explicit categorization of images.

Histogram equalization was used to enhance the low contrast X-ray images for prominent training of the CNN model.

The number of images used to train the model is low.

Out of the pre-trained models used, DenseNet-201 took the highest time of 2298 seconds to train and the simple customized model (sCNN) took the lowest time of 200 seconds to train.

3Khandaker Foysal Haque et al / 2020

The proposed sequential CNN model from scratch provided better accuracy when trained with the relevant medical dataset than the pre-trained models that are trained with a generalized ImageNet dataset.

The model was trained with very little resources and time.

The model was trained with a limited dataset.

Due to its simpler architecture, the model is computationally efficient.

4Md. Zabirul Islam et al / 2020

The dataset collection was made available to the general public.

CNN in combination with LSTM provided better classification accuracy than CNN.

Smaller sample size.

Focussed only on posterior-anterior (PA) view of the X-rays, so unable to differentiate other views of the X-ray images.

The X-ray images containing multiple disease symptoms were not classified efficiently.

CNN-LSTM took 18372.0 seconds to train the model which is faster than the time taken to train the normal CNN.

5Alaa S. Al-Waisy et al / 2020

Images are pre-processed to reduce the generalization error and to avoid overfitting.

The performance of DNNs were improved using transfer learning.

The parallel architecture of the two pre-trained models provides a high degree of confidence to radiologists.

Used a limited amount of dataset.

Increased network complexity due to the parallel architecture.

The model was able to diagnose an X-ray image within 2 seconds.

6Linda Wang et al /2020

The authors created an open-source benchmark dataset called COVIDx.

The proposed COVID-Net architecture is publicly available for open access.

Made use of lightweight design patterns.

Used selective long-range connectivity where ever necessary which improved representational capacity and made training easier still maintaining computational complexity and memory efficiency.

Sensitivity needs to be improved to limit the amount of missed COVID-19 cases.

Requires a collection of additional data to generalize the model.

Maintains reduced computational complexity.

7Yujin Oh et al / 2020

To overcome the data scarcity problem, patch-based DNN with random patch cropping was proposed and trained stably with a limited dataset.

Medical resources are saved by utilizing it only for COVID-19 affected patients.

Pre-trained models are used to stabilize training.

The Lung region was extracted from the X-ray images to improve the classification performance.

Well-curated datasets are lacking.

Network complexity and computational time are less due to patch-based training.

8Julian D Arias – Londono et al / 2020

Regularization techniques were used to manage the data imbalance problem.

The performance of the model was improved by pre-processing the data.

Class imbalance problem.

COVID-Net network as a base for the developed model has made it computationally efficient.

9Khalid El Asnaoui & Youness Chawki / 2020

Weight decay & L2 regularizer are used to avoid overfitting.

Intensity normalization & CLAHE were used to eliminate noise and improve the quality of the X-ray images.

To overcome the data scarcity problem and training time, transfer learning models were used.

Usage of a limited number of COVID-19 X-ray images.

Out of the pre-trained models used, VGG-19 took only 53493.08 s for training but achieved the lowest accuracy of 75.5%. Inception-ResNet-V2 required 79184.28 s for training which provided the highest accuracy of 92.18%.

10El-Sayed M El- Kenawy / 2021

Detection cost is decreased significantly.

Transfer learning technique was used.

Dropout was used to avoid overfitting.

Requires an improvement in convergence rates.

Computations are time-consuming due to the dense web and the time taken to classify a new CXR image could be a maximum of 135 seconds.

11Md Manjurul Ahsan et al / 2021

A pre-trained network, trained on a larger dataset was used to work efficiently on small datasets.

Data imbalance problem.

Computationally less expensive as only the layers close to the output units are retrained.

12Pradeep Kumar Chaudhary and Ram Bilas Pachori /2020

FDB used for image decomposition provided better multi-resolution representation.

FBSE performs better analysis of non-stationary signals as it uses Bessel functions.

CLAHE was applied to improve the contrast of the X-ray images.

Transfer learning was used to improve the quality of the deep features.

The image decomposition process requires more amount of time.

The model was trained with less amount of dataset.

Suffers from computational complexity problems due to the long time taken at the image decomposition step.

13Afshar Shamsi et al /2021

Classification task has been made easier using pre-trained networks.

Epidemic uncertainty was calculated for the reliable detection of classes.

The dataset used was unbalanced.

The use of pre-trained models reduced the computational complexity.

14Zhang et al / 2021

Handles chest CT images and chest X-ray images simultaneously.

Multi-way data augmentation is used to avoid overfitting.

Classification performance was improved using attention mechanisms.

The model was trained with less amount of dataset.

Maintains reduced computational complexity.

15Ouchicha et al / 2020

Batch normalization technique is used which improves the convergence during training.

Vanishing gradient problem was solved by the usage of residual network.

Model has been trained on a smaller dataset.

Computationally efficient due to the use of skip connections.
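Several of the studies above pre-process the X-rays with contrast enhancement such as histogram equalization or CLAHE before training. As a rough illustration of the simpler of the two, here is a minimal NumPy sketch of global histogram equalization for an 8-bit image; the image is synthetic, and the papers' exact pre-processing pipelines are not reproduced:

```python
import numpy as np

def equalize_histogram(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                 # first non-empty gray level
    # Map each gray level through the normalized CDF to spread the contrast.
    lut = np.clip(np.round((cdf - cdf_min) / (img.size - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# A synthetic low-contrast "X-ray": gray levels confined to [100, 119].
img = np.tile(np.arange(100, 120, dtype=np.uint8), (20, 1))
out = equalize_histogram(img)
print(img.min(), img.max(), "->", out.min(), out.max())   # 100 119 -> 0 255
```

CLAHE differs in that it equalizes small tiles with a clip limit rather than the whole image, which avoids over-amplifying noise in uniform lung regions.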


Detection of COVID-19 from CT images

CT imaging serves as a valuable tool in early COVID-19 detection. It is preferred because it provides a three-dimensional view of the lung, which contains detailed information about the affected region. Fig. 9 presents sample CT images of a) normal cases and b) COVID-19 positive cases from the COVIDx-CT dataset. Several recently proposed deep learning methods that used CT scans as the imaging technique are discussed briefly below.
Fig. 9

Sample CT images of a) Normal cases b) COVID-19 positive cases

Xing Wu et al [71] proposed COVID-AL for the diagnosis of COVID-19 from CT images, which used a pre-trained 2D U-Net for segmentation of the lung region. The network obtained 95% accuracy with only 30% of the dataset labeled, which reduced the manual labeling cost. Hayden Gunraj et al [72] introduced a deep CNN named COVIDNet-CT to detect COVID-19 from CT images. It used a machine-driven design exploration strategy that automatically identifies the optimal architecture for building a deep CNN; an accuracy of 99.1% was obtained with low computational complexity. Tanvir Mahmud et al [73] proposed CovTANet, a hybrid neural network for early COVID-19 diagnosis and severity prediction from CT scans. The network, which used a segmentation network called TA-SegNet for lesion segmentation, obtained an accuracy of 95.8% for severity prediction. In [74] a model was developed by integrating two 3D-ResNets, one of which is utilized as a binary classifier to determine whether the pneumonia seen in CT images is COVID-19 or Interstitial Lung Disease (ILD). By stacking Prior-Attention Residual Learning (PARL) blocks, the models are easily trained end to end to achieve an accuracy of 93.3%. Xinggang Wang et al [75] presented DeCoVNet, a deep learning network for lesion localization and COVID-19 detection from 3D CT images. The proposed model obtained an accuracy of 90.1% without annotating the COVID-19 lesions in the CT images. In [76] ten popular pre-trained CNN models were tested on the proposed artificial-intelligence-based CAD system to detect COVID-19 from CT slices; ResNet-101 yielded the best performance with an accuracy of 99.51%. Chun Li et al [77] proposed a method that used a transfer learning approach to train the model with limited CT images. It utilized the pre-trained CheXNet to predict COVID-19 cases and achieved an accuracy of 87% in severity assessment.
In [78] a DCNN model named ReCOV-101, which used ResNet-101 as its backbone, was proposed for identifying COVID-19 from CT scans. Data augmentation and transfer learning were incorporated to enlarge the dataset, and skip connections were used to deepen the model, which yielded an accuracy of 94.9%. Shuai Wang et al [79] investigated the effectiveness of a deep learning system with a modified pre-trained Inception V3 CNN model (M-Inception) for the screening of COVID-19 from CT images. Image processing and feature extraction were performed, improving the algorithm's accuracy in predicting COVID-19 to 85.2%. In [80] a method was implemented to extract features from CT scans using four image filters along with the proposed Composite Hybrid Feature Selection (CHFS) model. A Stack Hybrid Classification System (SHC) was used for classifying COVID-19, which produced a better accuracy (96.07%) than using a DCNN alone. The accuracy of the discussed models is given in Table 4. A comparison and discussion of the advantages, limitations and computational complexity of the studied methods using CT images are summarized in Table 5.
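Skip (residual) connections, used by ReCOV-101 and the residual-learning blocks in [74], add a block's input back to its output so that very deep networks remain trainable. A toy NumPy sketch of the idea, with illustrative shapes and weights not taken from any of the cited models:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = ReLU(f(x) + x): the identity shortcut lets features (and, during
    training, gradients) bypass the two weight layers."""
    out = relu(x @ w1)      # first weight layer
    out = out @ w2          # second weight layer
    return relu(out + x)    # skip connection: add the block input back

# With zero weights the block reduces to the identity for non-negative input,
# which illustrates why residual layers are easy to optimize.
x = np.array([[1.0, 2.0, 3.0]])
w = np.zeros((3, 3))
print(residual_block(x, w, w))   # [[1. 2. 3.]]
```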
Table 4

Accuracy of the models using CT images

Sl.No | Article/Year | Model | Dataset | Accuracy
1 | Xing Wu et al / 2020 | COVID-AL | China Consortium of Chest CT Image Investigation | 95%
2 | Hayden Gunraj et al / 2020 | COVIDNet-CT | COVIDx-CT | 99.1%
3 | Tanvir Mahmud et al / 2021 | CovTANet | MosMed dataset | 95.8%
4 | Jun Wang et al / 2020 | Two 3D-ResNets (with prior attention) | CT scans from several cooperative hospitals | 93.3%
5 | Xinggang Wang et al / 2020 | DeCoVNet | CT scans of COVID-19 patients from the Picture Archiving and Communication System (PACS) of the radiology department | 90.1%
6 | Ali Abbasian Ardakani et al / 2020 | ResNet-101 | HRCT images of patients from PACS | 99.5%
7 | Chun Li et al / 2021 | CheXNet | COVID-19 CT Dataset | 87%
8 | Varan Singh Rohila et al / 2021 | ReCOV-101 | MosMed dataset | 94.9%
9 | Shuai Wang et al / 2021 | M-Inception (modified Inception V3) | Images from Xi'an Jiaotong University First Affiliated Hospital (center 1), Nanchang University First Hospital (center 2) and Xi'an No. 8 Hospital of Xi'an Medical College (center 3) | 85.2%
10 | Ahmed Abdullah Farid et al / 2021 | Four image filters coupled with the Composite Hybrid Feature Selection (CHFS) model | Open-access Kaggle benchmark dataset | 96.07%
Table 5

Comparison and discussion on the studied methods using CT images

1. Xing Wu et al / 2020
Advantages: The manual labeling cost of the dataset was reduced. A selected subset of CT scans was used to reduce the computational cost. Lung segmentation was performed to minimize the system computation, thereby increasing the accuracy.
Limitations: Requires clinical information to be combined with the CT scans to generate more reliable outputs.
Computational complexity: Computational cost is reduced by the proper subset selection of CT scans.

2. Hayden Gunraj et al / 2020
Advantages: The authors created the benchmark dataset COVIDx-CT. The proposed COVIDNet-CT is available as open source to the general public. CT images were pre-processed to improve the performance of the model.
Limitations: The COVIDx-CT dataset needs to be expanded to improve the generalizability of the model.
Computational complexity: Computational complexity is minimized by the use of micro-architecture designs.

3. Tanvir Mahmud et al / 2021
Advantages: A tri-level attention mechanism was proposed to improve feature recalibration. Various pre-trained backbone networks were incorporated in TA-SegNet to provide better optimization. Performed well even at the early diagnosis phase.
Limitations: The model was trained on a limited dataset.
Computational complexity: High computational complexity due to the hybrid network.

4. Jun Wang et al / 2020
Advantages: The image-level labels used by the model make implementation easier. The degradation problem was addressed by the residual learning blocks. Lung segmentation was used to improve detection performance.
Limitations: May fail to detect COVID-19 lesions at an early stage.
Computational complexity: Fewer hyperparameters and weak image-level labels make the model easy to implement.

5. Xinggang Wang et al / 2020
Advantages: Requires minimal manual annotation, and training the model is easy. Lightweight, with better classification performance. A pre-trained network was used to provide better performance.
Limitations: Limited number of training samples. The network design needs to be improved.
Computational complexity: Computations are light, as the model took just 1.93 seconds to classify a new CT image.

6. Ali Abbasian Ardakani et al / 2020
Advantages: The model focused on the lung region to improve performance. The use of transfer learning made training easier.
Limitations: Requires annotation from expert radiologists.
Computational complexity: Computational cost is reduced and implementation is made easier using pre-trained models.

7. Chun Li et al / 2021
Advantages: Pre-trained models are used to achieve better performance.
Limitations: The model was trained with limited samples. Network optimization and network design need to be improved to increase diagnostic accuracy.
Computational complexity: Training time should be minimized and network optimization sped up to meet the expected computational efficiency.

8. Varan Singh Rohila et al / 2021
Advantages: Skip connections are used to bypass layers that hurt the performance of the model. Segmentation was used to improve the model's reliability. Transfer learning was used to reduce the convergence time. Early stopping and regularization were used to avoid overfitting.
Limitations: A limited dataset was used for training.
Computational complexity: Comparatively little hardware was utilized, as the model was trained on a single GPU.

9. Shuai Wang et al / 2021
Advantages: The ROI was focused on to improve model performance. Transfer learning was used to make training easier.
Limitations: A limited dataset was used for training the model. The low signal-to-noise ratio made efficacy challenging.
Computational complexity: The pre-trained model reduced the computational cost and made implementation easier.

10. Ahmed Abdullah Farid et al / 2021
Advantages: The selected features were reduced using four filters. Multiple classifiers are used for classification to achieve high classification accuracy.
Limitations: The model was trained with a smaller number of data samples.
Computational complexity: The hybrid network made the model computationally expensive.
The following research gaps are identified from the related works:

Existing models are trained with limited data samples.

A limited number of models have been proposed for multiclass classification.

A limited number of studies have used ultrasound as the imaging modality.

No complete real-time end-to-end systems using deep learning methods are available.

Binary and Multiclass Classification

Classification in deep learning refers to the process of identifying which category a particular sample belongs to, and a classifier is an algorithm that assigns sample data to a class. Binary classification assigns samples to one of two class labels: in COVID-19 detection, it indicates whether a sample is COVID-19 positive or negative. However, it can be inaccurate, because other lung diseases may be misclassified as COVID-19. Multiclass classification assigns samples to more than two class labels; here, samples might fall under COVID-19, bacterial pneumonia, viral pneumonia or normal cases. A few deep learning models proposed for binary and multiclass classification are discussed briefly. Fig. 10 depicts the accuracy of the models for binary and multi-class classification.
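The difference between the two settings shows up in the output layer: a binary head typically emits a single sigmoid probability, while a multiclass head emits one softmax probability per class. A minimal NumPy sketch of that distinction; the logits and class list here are made up for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())        # subtract the max for numerical stability
    return e / e.sum()

# Binary head: a single logit mapped to P(COVID-19 positive).
p_pos = sigmoid(2.0)               # a probability in (0, 1)

# Multiclass head: one logit per class; probabilities sum to 1.
classes = ["COVID-19", "bacterial pneumonia", "viral pneumonia", "normal"]
probs = softmax(np.array([3.1, 0.4, 0.2, 1.0]))
print(classes[int(np.argmax(probs))])   # COVID-19
```

With four classes the model must carve the feature space into four regions rather than two, which is one reason the multiclass accuracies reported below are consistently lower than the binary ones.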
Fig. 10

Accuracy of the models for binary and multi-class classification

Ioannis D Apostolopoulos and Tzani A Mpesiana [81] evaluated the effectiveness of recently developed CNN models used for medical image classification. Identifying various abnormalities from medical images with limited datasets was achieved using a transfer learning procedure, which yielded 96.78% accuracy for binary classification and 94.72% for multiclass classification. Tulin Ozturk et al [82] introduced a deep CNN model named DarkCovidNet for automatically detecting COVID-19 from raw X-ray images; the obtained accuracy is 98.08% for binary classification and 87.02% for multiclass classification. In [83] Tanvir Mahmud et al proposed CovXNet, a deep learning model that utilized depthwise convolution to efficiently extract features from chest X-ray images. The pre-trained convolutional layers are fine-tuned and transferred directly to train the model to detect COVID-19 from the smaller database; 97.4% accuracy is obtained for two-class classification and 90.02% for multiclass classification. Ioannis D Apostolopoulos et al [84] employed MobileNet to detect abnormalities from chest X-ray images in three distinct ways. The MobileNet trained from scratch performed better than the other two approaches for both seven-class and binary classification, achieving 99.18% accuracy for binary classification and 87.66% for multiclass classification. In [85] an artificial neural network based on capsule networks, named convolutional CapsNet, was proposed to detect COVID-19 from CXR images using fewer layers; the accuracy obtained is 97.24% for binary classification and 84.22% for multiclass classification. Table 6 shows the difference in accuracy obtained by the discussed models for binary and multiclass classification.
From the table, it can be observed that the models performed better for binary classification than for multiclass classification.
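Most of the models above rely on transfer learning: a backbone pre-trained on a large dataset is kept frozen, and only a small classification head is trained on the scarce COVID-19 data. A toy NumPy sketch of that division of labor, in which the "backbone" is just a fixed random projection and the data are synthetic, neither taken from any cited study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pre-trained backbone: a fixed random projection with
# a ReLU. In the studies above this would be a CNN pre-trained on ImageNet.
W_backbone = rng.normal(size=(64, 16))
def extract_features(x):
    return np.maximum(x @ W_backbone, 0.0)    # frozen: never updated

# Trainable head: logistic regression on the frozen features.
w_head = np.zeros(16)
def train_head(X, y, lr=0.01, epochs=200):
    global w_head
    F = extract_features(X)                   # computed once: backbone is fixed
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(F @ w_head)))
        w_head -= lr * F.T @ (p - y) / len(y) # gradient step on the head only

# Synthetic two-class data (illustrative, not medical images).
X = np.vstack([rng.normal(2.0, 1.0, (20, 64)),
               rng.normal(-2.0, 1.0, (20, 64))])
y = np.array([1.0] * 20 + [0.0] * 20)
W_init = W_backbone.copy()
train_head(X, y)
```

Because only the 16 head weights are updated, training needs far fewer labeled samples than fitting the whole network, which is why this strategy recurs throughout the surveyed work.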
Table 6

Accuracy of models for binary and multiclass classification

Sl.No | Article/Year | Model | Dataset | Binary-class accuracy | Multi-class accuracy
1 | Ioannis D Apostolopoulos and Tzani A Mpesiana / 2020 | MobileNet V2 | GitHub repository, Radiological Society of North America (RSNA), Radiopaedia and Italian Society of Medical and Interventional Radiology (SIRM) | 96.78% | 94.72%
2 | Tulin Ozturk et al / 2020 | DarkCovidNet | COVID-19 X-ray image data collection created by Cohen J P; chest X-ray 8 data collection presented by Wang et al | 98.08% | 87.02%
3 | Tanvir Mahmud et al / 2020 | CovXNet | Database from Guangzhou Medical Center, China; database from Sylhet Medical College, Bangladesh | 97.4% | 90.2%
4 | Ioannis D Apostolopoulos et al / 2020 | MobileNet | X-rays from a repository provided by Dr. Cohen, Radiological Society of North America (RSNA), Radiopaedia and Italian Society of Medical and Interventional Radiology (SIRM) | 99.18% | 87.66%
5 | Suat Toraman et al / 2020 | Convolutional CapsNet | Database generated by Cohen; database generated by Wang | 97.24% | 84.22%

Conclusion and Future Directions

There is a shortage of RT-PCR kits due to the rapid rise in COVID-19 cases. Medical images coupled with deep learning techniques are very helpful in providing faster and more accurate results during the rapid spread of COVID-19, and deep learning models produce better accuracy when trained with larger datasets. In this article, different image processing techniques that enhance the quality of medical images have been discussed. The accuracy of a deep learning model can be increased by using high-quality medical images, so there is a strong need to pre-process the medical images in order to enhance them and predict the disease accurately. As there are not sufficient datasets publicly available to train the models, most of the proposed works have used transfer learning and data augmentation strategies to overcome the data scarcity problem. A comparison made between a few models with and without data augmentation shows that the models performed better when the datasets were augmented. Because of the rapid spread of the disease, most COVID-19 diagnosis architectures used pre-trained models to save time; VGG, ResNet-50, DenseNet, Inception V3, Xception and MobileNet are the pre-trained models most often employed in COVID-19 detection. The CNN architecture, which is widely used in image classification tasks, is discussed. The roles of the two medical imaging techniques, X-ray and CT, in the detection of COVID-19 are described briefly. Though X-ray imaging is simple, less expensive and widely available, CT imaging is highly sensitive in predicting the severity of the disease. Due to the wide availability of X-ray image datasets in comparison with CT image datasets, most researchers have utilized chest X-ray images for the detection of COVID-19. A comparison has been made of the state-of-the-art methods, which could guide young researchers toward future directions.
Models proposed for binary and multiclass classification were studied, and it was observed that the models produced better accuracy for binary classification than for multiclass classification. Accuracy, specificity, precision, recall, F1-score, ROC (Receiver Operating Characteristic) curve and AUC (Area Under the Curve) are the common metrics used to evaluate the performance of the models. As this is a new pandemic, researchers can validate the models with larger datasets in the future. Features obtained from different transfer learning models could be combined into hybrid models to obtain better results. So far, only a limited number of models have been proposed for multiclass classification, and they yielded lower accuracy than binary classification; in the future, researchers can focus on improving the efficiency of multiclass classification models. Weakly supervised deep learning frameworks can reduce the manual labeling cost, saving time and human resources. Only a limited number of studies have used ultrasound as the imaging modality for COVID-19 detection, so future researchers can work on ultrasound datasets. Since the virus infecting the lung region is considered a severe infection stage, researchers can also investigate other organs affected by the virus. Young researchers can build end-to-end systems using deep learning mechanisms that can be deployed in primary hotspots to identify COVID-19 cases.
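For reference, the common evaluation metrics listed above can all be derived from the binary confusion-matrix counts. A small self-contained sketch, with illustrative labels not drawn from any study:

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall (sensitivity), specificity and F1-score
    computed from the binary confusion-matrix counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0          # sensitivity
    specificity = tn / (tn + fp) if tn + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "specificity": specificity, "f1": f1}

# Illustrative labels: 1 = COVID-19 positive, 0 = negative.
m = classification_metrics([1, 1, 1, 0, 0, 0, 1, 0],
                           [1, 0, 1, 0, 1, 0, 1, 0])
print(m["accuracy"], m["f1"])   # 0.75 0.75
```

Recall (sensitivity) is the metric most directly tied to missed COVID-19 cases, which is why several of the surveyed papers single it out for improvement.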