
COVID-19 Classification Based on Deep Convolution Neural Network Over a Wireless Network.

Wafaa A Shalaby1, Waleed Saad1,2, Mona Shokair1, Fathi E Abd El-Samie1,3, Moawad I Dessouky1.   

Abstract

Coronavirus Disease 2019 (COVID-19) first appeared in China in December 2019 and then spread at a high rate around the world. Therefore, rapid diagnosis of COVID-19 has become a very active research topic. One of the possible diagnostic tools is to use a deep convolution neural network (DCNN) to classify patient images. Chest X-ray is one of the most widely-used imaging techniques for classifying COVID-19 cases. This paper presents a proposed wireless communication and classification system for X-ray images to detect COVID-19 cases. Different modulation techniques are compared to select the most reliable one with the least required bandwidth. The proposed DCNN architecture consists of deep feature extraction and classification layers. Firstly, the proposed DCNN hyper-parameters are adjusted in the training phase. Then, the tuned hyper-parameters are utilized in the testing phase. These hyper-parameters are the optimization algorithm, the learning rate, the mini-batch size and the number of epochs. Simulation results show that the proposed scheme outperforms other related pre-trained networks. The performance metrics are accuracy, loss, confusion matrix, sensitivity, precision, F1 score, specificity, Receiver Operating Characteristic (ROC) curve, and Area Under the Curve (AUC). The proposed scheme achieves a high accuracy of 97.8 %, a specificity of 98.5 %, and an AUC of 98.9 %.
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2021.


Keywords:  COVID-19; Convolution neural network; Feature extraction; Wireless communications

Year:  2021        PMID: 33994667      PMCID: PMC8112225          DOI: 10.1007/s11277-021-08523-y

Source DB:  PubMed          Journal:  Wirel Pers Commun        ISSN: 0929-6212            Impact factor:   1.671


Introduction

COVID-19 is a respiratory disease that spreads at a high rate around the whole world [1]. The number of infected cases increases daily in nearly all countries, according to the updated data of the World Health Organization (WHO) [2]. Fever, cough, shortness of breath, sore throat and headache are the most important symptoms of COVID-19 [3]. It is transmitted from person to person via coughed droplets or by touching contaminated surfaces [4]. There are several ways of diagnosing COVID-19, such as the blood PCR test [5], but it is expensive. Moreover, it is time-consuming, and it is not suitable given the rapid spread of the disease. Another way is COVID-19 detection from chest X-ray images [6, 7]. This is attributed to the fact that the coronavirus affects the lungs. Hence, the effects of the disease can be diagnosed through X-ray image examination by radiologists. Additionally, different deep learning techniques are used to detect COVID-19 cases from X-ray images in less time, and with higher accuracy, than radiologists achieve [8, 9]. Hence, early diagnosis, early case isolation and reduction of virus spreading can be achieved. The DCNN is an example of deep learning techniques. It depends on a gradient descent algorithm during training until reaching the optimum solution [10]. There are several pre-trained CNNs such as Resnet18 [11], VGG-16 [12], GoogleNet [13], Xception [14], ResNet50 [15], DenseNet-121 [16], and Alexnet [17]. Also, deep transfer learning can be used to update the weights and minimize the training time [18]. The concept of deep transfer learning comes from the fact that a deep learning network can be reused with different input images for classification applications. In this paper, a wireless system for COVID-19 detection from chest X-ray images is suggested. In this system, the sensed X-ray images of the patients are compressed through a resizing strategy, and then modulated using a reliable digital modulation technique.
At the receiver side, after the signal is demodulated and image pre-processing functions are performed, the deep features are extracted from the images using an efficient DCNN. Firstly, the system enters the training phase to adjust the proposed DCNN hyper-parameters. Thereafter, the testing phase is applied with the tuned hyper-parameters. Extensive simulation experiments are implemented to study the performance of the proposed system. From the results, the proposed system outperforms all compared related networks. The proposed classification scheme achieves a high classification accuracy of 97.8 %, a specificity of 98.5 % and an AUC of 98.9 %. The main contributions of the paper can be summarized as follows:

- Introducing a wireless system for detecting COVID-19 cases from X-ray images based on a DCNN.
- Suggesting a DCNN structure for deep feature extraction from X-ray images.
- Adjusting the hyper-parameters of the proposed DCNN model so that the best performance of the system can be achieved.
- Testing different digital modulation techniques for X-ray image transmission through the wireless channel.
- Executing various experiments to compare the performance of the proposed system with those of other related works.

The rest of the paper is organized as follows. Section 2 illustrates the basic concepts of deep learning and deep transfer learning. The basic model of the proposed system is discussed in Sect. 3. The performance metrics are introduced in Sect. 4. Simulation results are discussed in Sect. 5. Finally, conclusions are presented in Sect. 6.

Convolution Neural Network (CNN) Overview

Deep learning has been used in several medical applications such as brain tumor, skin lesion, iris defect, breast cancer and, recently, COVID-19 detection. By applying deep learning techniques, efficient, fast, safe, and accurate COVID-19 detection can be implemented. Deep learning networks consist of several layers that provide feature extraction and classification of input images. The CNN is considered an important deep learning tool for dealing with images [18]. It is frequently used in medical applications. The name CNN is attributed to the application of convolution kernels in the input layers. The CNN structure consists of a stack of layers. The first layers are usually convolution layers that detect features such as edges and shapes in the image. The output of the convolution layer is computed from the following relation [19]:

y = f( Σ_k (x_k * w_k) + b ),

where k refers to the input map index, x_k is the input of the convolution layer, w_k is the corresponding weight of the layer, b is the bias and f is the activation function, which can be the Rectified Linear Unit (ReLU), Softmax or any other function. The ReLU activation provides faster training by keeping only the positive values and removing the negative ones according to the following relation:

f(x) = max(0, x).

Then, a pooling layer is used to reduce the number of weights depending on the window size. Therefore, pooling layers perform a down-sampling operation on their inputs. Hence, the output of the pooling layer can be expressed as

y = g(x_p),

where g represents the down-sampling function and x_p is the input to the pooling layer. There are two types of pooling: max-pooling and average pooling. Max-pooling gives the maximum value of each selected window, while average pooling provides the average value of each window. Finally, the fully-connected (classification) layer calculates the likelihood of each class from the output features of the previous steps to classify the images. Therefore, the classification layer follows the same concepts as traditional neural networks.
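To make the three operations above concrete, the following sketch is a toy NumPy implementation (illustrative only, not the paper's Matlab code) that applies a single convolution kernel, the ReLU activation, and 2×2 max-pooling to a small input. Note that, as in most CNN frameworks, the "convolution" is implemented as cross-correlation.

```python
import numpy as np

def conv2d_valid(x, w, b=0.0):
    """'Valid' 2-D convolution (implemented as cross-correlation,
    as in most CNN frameworks) of one channel with one kernel."""
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    y = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            y[i, j] = np.sum(x[i:i + kh, j:j + kw] * w) + b
    return y

def relu(x):
    """ReLU activation: f(x) = max(0, x)."""
    return np.maximum(0.0, x)

def max_pool(x, size=2, stride=2):
    """Max-pooling: down-sample by taking the maximum of each window."""
    oh = (x.shape[0] - size) // stride + 1
    ow = (x.shape[1] - size) // stride + 1
    return np.array([[x[i * stride:i * stride + size,
                        j * stride:j * stride + size].max()
                      for j in range(ow)] for i in range(oh)])

x = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 "image"
w = np.array([[0.0, -1.0], [1.0, 0.0]])        # toy 2x2 kernel
feat = relu(conv2d_valid(x, w))                # 3x3 feature map
pooled = max_pool(feat)                        # 1x1 after 2x2 pooling
```

Each feature-map value here is x[i+1, j] − x[i, j+1], i.e. a simple directional difference; real networks learn the kernel weights during training.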
It is worth mentioning that another Softmax activation layer is usually used at the end of the network after the fully-connected layer. Moreover, batch normalization can be applied after each convolution to accelerate and stabilize training. Also, to reduce over-fitting, dropout layers can be used, which drop randomly selected neurons by setting their weights to zero. The concept of deep transfer learning depends on the utilization of well-known pre-designed networks for different classification categories [20]. A popular pre-trained network is Alexnet, which was proposed in 2012 [17]. It has five convolution layers and two fully-connected layers. The VGG16 was introduced in 2014 [12]. It has more parameters and deeper convolution stacks than Alexnet. In VGG19 [21], the number of layers is 19 instead of the 16 of VGG16. The drawback of VGG networks is slow training due to the large number of weights. GoogleNet was presented in 2014 [13]. The depth of this network is 22 layers. The input to this network must be of size 224 × 224 × 3. It was trained on the ImageNet dataset with an output of 1000 categories. It enhances classification and recognition accuracy by using nine inception modules [22]. ResNet was introduced in 2015 [11, 23]. It is a residual network that comes in variants with different numbers of layers, such as ResNet18, 50, 101, 152 and 1202. It includes convolution, max-pooling and fully-connected layers. ResNet18 has two branches: a residual connection alongside a feed-forward path. It contains 11 million parameters. DenseNet was introduced in 2017 [16]. It depends on dense connections between CNN layers.

The Proposed System Architecture

In this section, the proposed system for COVID-19 detection is presented. It is based on a DCNN for image feature extraction as shown in Fig. 1. The transmitter consists of sensors, image compression and modulation. The receiver is composed of a demodulation block, image processing, deep feature extraction using DCNN, and finally a classification layer for COVID-19 detection.
Fig. 1

The proposed wireless system architecture


The Transmitter

At the transmitter side, the X-ray image of the patient is first produced by the appropriate sensors. Then, it is compressed through a resizing strategy into a size of 224 × 224 in order to reduce the transmission bandwidth. Finally, the compressed image needs to be efficiently transmitted over the wireless channel at low Signal-to-Noise Ratios (SNRs). Different digital modulation techniques, including Frequency Shift Keying (FSK), Binary Phase Shift Keying (BPSK), Quadrature Phase Shift Keying (QPSK), and M-ary Quadrature Amplitude Modulation (M-QAM), are compared to select the most reliable one [24, 25]. For FSK modulation, the binary zeros and ones are transmitted as

s_FSK(t) = A cos(2π f_1 t) for a binary one, and A cos(2π f_0 t) for a binary zero,

where f_0 and f_1 are the carrier frequencies assigned to the binary zeros and ones, respectively. For the BPSK technique, the transmitted signal is represented as

s_BPSK(t) = A cos(2π f_c t + (1 − Y)π),

where f_c is the carrier frequency and Y is the binary bit, which is 0 or 1. The QPSK signal is represented as follows:

s_QPSK(t) = A [ I cos(2π f_c t) − Q sin(2π f_c t) ],

where I and Q are the binary bits (mapped to ±1) for the I and Q channels of the input signal, respectively. For QAM, the transmitted signal takes the same form as the QPSK signal [24], but the amplitudes of the I and Q channels are determined according to the number of bits of the M-QAM constellation, as shown in Table 1.
Table 1

Constellations of M-QAM

Constellation | Modulation size (M) | Number of bits (n)
QAM          | 4                   | 2
16-QAM       | 16                  | 4
64-QAM       | 64                  | 6
M-QAM        | M                   | log2(M)
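To make the bit-to-symbol relationships of Table 1 concrete, here is a small illustrative Python sketch (not part of the paper's Matlab simulation) that maps bit streams to baseband BPSK and QPSK/4-QAM symbols and computes n = log2(M):

```python
import math

def bits_per_symbol(M):
    """Number of bits carried by one M-ary symbol: n = log2(M)."""
    return int(math.log2(M))

def bpsk_map(bits):
    """BPSK baseband mapping: bit 0 -> -1, bit 1 -> +1 (a 180-degree
    phase shift on the carrier)."""
    return [2 * b - 1 for b in bits]

def qpsk_map(bits):
    """QPSK / 4-QAM: two bits per symbol, one on the I channel and one
    on Q, normalized to unit symbol energy."""
    assert len(bits) % 2 == 0
    return [complex(2 * bits[i] - 1, 2 * bits[i + 1] - 1) / math.sqrt(2)
            for i in range(0, len(bits), 2)]

print(bits_per_symbol(64))       # 64-QAM carries 6 bits per symbol
print(bpsk_map([0, 1, 1, 0]))    # [-1, 1, 1, -1]
print(qpsk_map([0, 0, 1, 1]))    # one complex symbol per bit pair
```

Because QPSK packs two bits into each symbol, it halves the symbol rate (and hence the occupied bandwidth) relative to BPSK at the same bit rate, which is the bandwidth argument made later in the paper.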

The Receiver

The transmitted signal is contaminated with Additive White Gaussian Noise (AWGN). Therefore, the received signal is

r(t) = s(t) * h(t) + n(t),

where h(t) is the AWGN channel impulse response, n(t) is the receiver noise, and * denotes convolution. At the receiver side, the received signal is first filtered by a Band-Pass Filter (BPF) with a sufficient pass-band. Then, it is demodulated according to the modulation technique applied at the transmitter. The quality of the demodulation can be quantified by the Bit Error Rate (BER). The theoretical BER of the M-QAM signal can be calculated as

BER = (2 / log2(M)) (1 − 1/√M) erfc( √( 3 log2(M) E_b / (2 (M − 1) N_o) ) ),

where M is the modulation size, erfc is the complementary error function and E_b/N_o is the energy-per-bit to noise power spectral density ratio. The theoretical BER for different modulation techniques over the AWGN channel is summarized in Table 2 [25].
Table 2

BER for different modulation techniques over AWGN channel

Scheme | BER
FSK    | (1/2) erfc( √(E_b / (2 N_o)) )
BPSK   | (1/2) erfc( √(E_b / N_o) )
QPSK   | (1/2) erfc( √(E_b / N_o) )
M-PSK  | (1 / log2(M)) erfc( √(log2(M) E_b / N_o) sin(π/M) )
64-QAM | (7/24) erfc( √(18 E_b / (126 N_o)) )
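The closed forms in Table 2 are easy to evaluate numerically. The sketch below is an illustrative Python translation (not the paper's Matlab code), using the general Gray-coded M-QAM approximation, whose coefficients reduce to the 64-QAM entry of Table 2:

```python
import math

def ber_bpsk(ebno_db):
    """BPSK/QPSK theoretical BER over AWGN: (1/2) erfc(sqrt(Eb/No))."""
    ebno = 10 ** (ebno_db / 10)          # dB -> linear ratio
    return 0.5 * math.erfc(math.sqrt(ebno))

def ber_mqam(ebno_db, M):
    """Approximate Gray-coded M-QAM BER; for M = 64 this reduces to
    (7/24) erfc(sqrt(18 Eb / (126 No))), as in Table 2."""
    ebno = 10 ** (ebno_db / 10)
    n = math.log2(M)
    return (2 / n) * (1 - 1 / math.sqrt(M)) * \
        math.erfc(math.sqrt(3 * n / (2 * (M - 1)) * ebno))

print(ber_bpsk(8.4))        # close to 1e-4 (cf. Table 4)
print(ber_mqam(16.5, 64))   # 64-QAM needs far more Eb/No for the same BER
```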
Afterwards, the demodulated image is prepared for the proposed CNN model by removing the noise from the image. The noise is attributed to different sources in the transmission system. A Wiener filter is used for noise reduction. Thereafter, the deep features of the processed X-ray images are extracted by the proposed DCNN model. The DCNN model is first trained (as discussed in the next subsection) to adjust its parameters. Then, the adapted parameters are utilized in the validation process. Finally, the Softmax classifier is applied to detect COVID-19 cases from the X-ray images.

CNN Model Training

The proposed DCNN model is shown in Fig. 2. It consists of six convolution layers, each followed by batch normalization and a ReLU activation function to produce the output features, three max-pooling layers, a Global Average Pooling (GAP) layer, two fully-connected layers, a Softmax layer, and an output classification layer. The parameters of the proposed DCNN model are summarized in Table 3.
Fig. 2

The proposed DCNN model

Table 3

Description of the proposed DCNN model

Name | #Filters | Filter size | Stride | Padding | Weights | Output
Input layer | – | – | – | – | – | 224×224×3
Conv_1 | 32 | 3×3×3 | 1×1 | [1 1 1 1] | 3×3×3×32 | 224×224×32
Batch normalization + ReLU
Max.Pool_1 | – | 2×2 | 2×2 | [0 0 0 0] | – | 112×112×32
Conv_2 | 64 | 3×3×32 | 2×2 | [1 1 1 1] | 3×3×32×64 | 56×56×64
Batch normalization + ReLU
Max.Pool_2 | – | 2×2 | 3×3 | [0 0 0 0] | – | 19×19×64
Batch normalization + ReLU
Conv_3 | 64 | 1×1×64 | 3×3 | [0 0 0 0] | 1×1×64×64 | 19×19×64
Batch normalization + ReLU
Addition_1 | – | – | – | – | – | 19×19×64
Conv_4 | 256 | 3×3×64 | 1×1 | [1 1 1 1] | 3×3×64×256 | 19×19×256
Batch normalization + ReLU
Conv_5 | 256 | 5×5×32 | 2×2 | [1 1 1 1] | 5×5×32×256 | 56×56×256
Batch normalization + ReLU
Max.Pool_3 | – | 5×5 | 3×3 | [1 1 1 1] | – | 19×19×256
Addition_2 | – | – | – | – | – | 19×19×256
Conv_6 | 512 | 3×3×256 | 2×2 | [1 1 1 1] | 3×3×256×512 | 10×10×512
Batch normalization + ReLU
GAP | – | 10×10 | 1×1 | [0 0 0 0] | – | 1×1×512
Two fully-connected layers | – | – | – | – | – | 2×512
SoftMax layer
Classification output layer (2 classes)
The inputs are X-ray images that are collected by specialists. Samples of training X-ray images are shown in Fig. 3. Visual representations of some features extracted by the first convolution layer with 32 filters are shown in Fig. 4.
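The spatial sizes listed in Table 3 follow the standard output-size relation out = ⌊(in + 2p − k)/s⌋ + 1 for a kernel of size k, stride s and padding p. The following sketch (assuming symmetric padding, i.e. p taken once per border from the [p p p p] entries) reproduces several rows of the table:

```python
def conv_out(n, k, s=1, p=0):
    """Spatial output size of a convolution or pooling layer:
    floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

# Reproduce some of the spatial sizes in Table 3
print(conv_out(224, k=3, s=1, p=1))  # Conv_1     -> 224
print(conv_out(224, k=2, s=2, p=0))  # Max.Pool_1 -> 112
print(conv_out(112, k=3, s=2, p=1))  # Conv_2     -> 56
print(conv_out(56, k=2, s=3, p=0))   # Max.Pool_2 -> 19
print(conv_out(19, k=3, s=2, p=1))   # Conv_6 (on the 19x19 maps) -> 10
```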
Fig. 3

Examples of COVID-19 and Non-COVID chest X-ray images

Fig. 4

Visual representations of output features through the first convolution layer with 32 filters


Performance Metrics

In order to study the performance of the proposed model, the most important performance metrics are selected, including accuracy, loss, confusion matrix, sensitivity (recall), precision, F1 score, specificity and ROC curve. The accuracy is defined as the ratio between the true predictions and the total predictions:

Accuracy = (TP + TN) / (TP + TN + FP + FN),

where TN, TP, FN and FP are the true negatives, true positives, false negatives and false positives, respectively. A true positive occurs when the true case is COVID-19 and it is correctly detected as COVID-19 by the network. A false positive is the false detection of a normal case as a positive COVID-19 case. A true negative is the correct detection of a normal case. Finally, a false negative occurs when the true case is COVID-19 and it is wrongly detected as a normal case. The loss or error rate is the complement of the accuracy:

Loss = 1 − Accuracy = (FP + FN) / (TP + TN + FP + FN).

The specificity measures the ratio of negative cases that are correctly detected:

Specificity = TN / (TN + FP).

The precision is defined as the ratio between the true positive cases and the total of true positive and false positive cases:

Precision = TP / (TP + FP).

The sensitivity (or recall) measures the ratio of true positive cases that are correctly classified:

Sensitivity = TP / (TP + FN).

Moreover, the network performance can be evaluated with the F1 score, which depends on the values of both precision and recall:

F1 = 2 × (Precision × Recall) / (Precision + Recall).

Additionally, the ROC curve can be used to measure the network performance. It describes the relation between the true positive rate (sensitivity) and the false positive rate (1 − specificity). Furthermore, the confusion matrix is another measurement tool for the network performance. It contains the TP, TN, FP and FN values.
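These definitions translate directly into code. The following sketch computes all of the scalar metrics from confusion-matrix counts; the counts in the example are hypothetical (chosen only as a plausible balanced validation run), not the authors' actual data:

```python
def metrics(tp, tn, fp, fn):
    """Scalar classification metrics from confusion-matrix counts."""
    total = tp + tn + fp + fn
    accuracy = (tp + tn) / total
    loss = 1 - accuracy                      # error rate
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                  # sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, loss, specificity, precision, recall, f1

# Hypothetical counts for a 132-image validation run
acc, loss, spec, prec, rec, f1 = metrics(tp=64, tn=65, fp=1, fn=2)
print(round(acc, 3), round(spec, 3), round(rec, 3), round(f1, 3))
```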

Simulation Analysis

Matlab 2019b is used to train and test the proposed model. The training is performed on a CPU under the Windows 10 operating system, with an Intel Core i7 @ 1.99 GHz processor and 8 GB of RAM. The input data for the proposed system are chest X-ray images for the detection of COVID-19 cases. For the proposed system, different digital modulation techniques are compared, including FSK, BPSK, QPSK, 8-PSK, 16-PSK, 32-PSK, 4-QAM, 16-QAM, and 64-QAM. The quality of demodulation is quantified by the BER. As shown in Fig. 5, the E_b/N_o required to achieve a BER of 10⁻⁴ can be estimated as illustrated in Table 4. Therefore, the best performance can be obtained by using BPSK, QPSK or 4-QAM. Hence, QPSK (4-QAM) is chosen, since it carries two bits per symbol and therefore requires less bandwidth than BPSK for the same data rate.
Fig. 5

E_b/N_o vs. BER for various digital modulation techniques over the AWGN channel

Table 4

E_b/N_o for various digital modulation techniques at BER = 10⁻⁴

Modulation technique | E_b/N_o (dB)
BPSK, QPSK and 4-QAM | 8.4
FSK                  | 11.4
8-PSK                | 11.6
16-QAM               | 12.2
16-PSK               | 16
64-QAM               | 16.5
32-PSK               | 21
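The E_b/N_o entries in Table 4 can be recovered numerically by inverting the theoretical BER curves. Below is an illustrative sketch for the BPSK case (bisection on the monotonically decreasing BER; this is a hypothetical reconstruction, not necessarily how the authors produced the table):

```python
import math

def ber_bpsk(ebno_db):
    """Theoretical BPSK BER over AWGN: (1/2) erfc(sqrt(Eb/No))."""
    return 0.5 * math.erfc(math.sqrt(10 ** (ebno_db / 10)))

def required_ebno_db(target_ber, lo=0.0, hi=30.0):
    """Bisect for the Eb/No (dB) at which BPSK reaches target_ber."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if ber_bpsk(mid) > target_ber:
            lo = mid            # BER still too high -> need more Eb/No
        else:
            hi = mid
    return (lo + hi) / 2

print(round(required_ebno_db(1e-4), 1))   # ~8.4 dB, matching Table 4
```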
Thereafter, image processing is performed to adjust the size of each demodulated chest X-ray image to 224 × 224 × 3. For the deep feature extraction process, the proposed CNN model is first trained to adjust its parameters, and it is then used to extract the required features for the classification process. The dataset used for the training and testing processes is available at [26]. It contains 219 COVID-19 positive cases and 2686 Non-COVID cases. To balance the classes, 219 Non-COVID X-ray images are randomly selected for the training and validation processes. The prepared dataset is divided into 70 % for training and 30 % for validation. In the training phase, the CNN hyper-parameters are adjusted through the forward and backward steps until reaching the minimum error. These hyper-parameters are the optimization algorithm (SGDM, Adam, and RMS Prop), the LR (0.001 and 0.0001), the number of epochs (30, 40 and 50) and the Mini-Batch (MB) size (16, 32 and 64). In the testing phase, all tuned hyper-parameters are utilized. Then, the extracted features of any new demodulated X-ray image are classified using the Softmax layer. Hence, the output decision of COVID-19 or Non-COVID is determined. Tables 5, 6, and 7 present the performance results of the proposed CNN model using 30, 40, and 50 epochs, respectively. Furthermore, Figs. 6, 7, and 8 show the accuracy and loss for each case.
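The class balancing and 70/30 split described above can be sketched as follows (with hypothetical file-ID lists; the real dataset lives at [26]):

```python
import random

def balanced_split(covid_ids, non_covid_ids, train_frac=0.7, seed=0):
    """Sample as many Non-COVID images as COVID ones, label them
    (1 = COVID, 0 = Non-COVID), shuffle, and split train/validation."""
    rng = random.Random(seed)
    sampled = rng.sample(non_covid_ids, len(covid_ids))  # 219 of the 2686
    data = [(i, 1) for i in covid_ids] + [(i, 0) for i in sampled]
    rng.shuffle(data)
    cut = int(train_frac * len(data))
    return data[:cut], data[cut:]

covid = [f"covid_{i}" for i in range(219)]       # hypothetical IDs
normal = [f"normal_{i}" for i in range(2686)]
train, val = balanced_split(covid, normal)
print(len(train), len(val))   # 306 132
```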
Table 5

Performance of the proposed CNN for 30 epochs

Optimization | MB size | LR | Accuracy (%) | Precision (%) | Recall (%) | Specificity (%) | F1 score (%)
Adam     | 16 | 0.001  | 93.1 | 91.4 | 96.9 | 89.4  | 93.4
Adam     | 16 | 0.0001 | 93.9 | 91.4 | 96.6 | 90.9  | 94.1
Adam     | 32 | 0.001  | 94.6 | 92.7 | 96.9 | 92.4  | 94.8
Adam     | 32 | 0.0001 | 95.4 | 94.1 | 96.9 | 93.9  | 95.5
Adam     | 64 | 0.001  | 87.9 | 89.0 | 86.4 | 89.4  | 87.7
Adam     | 64 | 0.0001 | 90.2 | 94.9 | 84.8 | 95.4  | 89.6
RMS Prop | 16 | 0.001  | 90.1 | 94.9 | 84.8 | 95.4  | 89.6
RMS Prop | 16 | 0.0001 | 91.6 | 93.6 | 89.3 | 93.9  | 91.4
RMS Prop | 32 | 0.001  | 83.4 | 75.6 | 98.4 | 86.18 | 85.5
RMS Prop | 32 | 0.0001 | 86.4 | 92.8 | 78.8 | 93.9  | 85.2
RMS Prop | 64 | 0.001  | 84.2 | 89.5 | 77.2 | 90.9  | 82.9
RMS Prop | 64 | 0.0001 | 89.4 | 98.1 | 80.3 | 98.4  | 88.3
SGDM     | 16 | 0.001  | 89.3 | 85.1 | 95.4 | 83.3  | 90.0
SGDM     | 16 | 0.0001 | 90.9 | 92.1 | 89.4 | 92.4  | 90.7
SGDM     | 32 | 0.001  | 91.7 | 96.6 | 86.3 | 96.9  | 91.2
SGDM     | 32 | 0.0001 | 93.9 | 95.3 | 92.4 | 95.5  | 93.8
SGDM     | 64 | 0.001  | 87.8 | 91.6 | 83.3 | 92.4  | 87.3
SGDM     | 64 | 0.0001 | 85.6 | 96.1 | 74.2 | 96.9  | 83.8
Table 6

Performance of the proposed CNN for 40 epochs

| Optimization | MB size | LR | Accuracy (%) | Precision (%) | Recall (%) | Specificity (%) | F1 score (%) |
| Adam | 16 | 0.001 | 92.4 | 91.2 | 93.9 | 90.9 | 92.5 |
| Adam | 16 | 0.0001 | 93.2 | 90.1 | 96.9 | 89.3 | 93.4 |
| Adam | 32 | 0.001 | 90.2 | 89.5 | 90.9 | 89.3 | 90.2 |
| Adam | 32 | 0.0001 | 93.9 | 91.4 | 96.9 | 90.9 | 94.1 |
| Adam | 64 | 0.001 | 85.4 | 85.4 | 85.2 | 90.5 | 87.9 |
| Adam | 64 | 0.0001 | 91.6 | 96.6 | 86.3 | 96.9 | 91.2 |
| RMS Prop | 16 | 0.001 | 85.5 | 85.4 | 90.4 | 85.8 | 87.9 |
| RMS Prop | 16 | 0.0001 | 88.9 | 87.5 | 88.5 | 87.9 | 87.9 |
| RMS Prop | 32 | 0.001 | 90.9 | 90.9 | 90.9 | 90.9 | 90.9 |
| RMS Prop | 32 | 0.0001 | 93.8 | 96.7 | 90.9 | 96.9 | 93.7 |
| RMS Prop | 64 | 0.001 | 80.6 | 80.0 | 82.3 | 80.3 | 80.9 |
| RMS Prop | 64 | 0.0001 | 85.8 | 84.8 | 80.2 | 84.9 | 82.5 |
| SGDM | 16 | 0.001 | 90.2 | 86.7 | 95.2 | 87.3 | 90.8 |
| SGDM | 16 | 0.0001 | 91.7 | 92.3 | 90.9 | 92.4 | 91.6 |
| SGDM | 32 | 0.001 | 94.3 | 94.0 | 95.4 | 93.9 | 94.7 |
| SGDM | 32 | 0.0001 | 95.5 | 96.8 | 93.9 | 96.9 | 95.4 |
| SGDM | 64 | 0.001 | 87.0 | 83.8 | 91.9 | 84.0 | 87.7 |
| SGDM | 64 | 0.0001 | 89.3 | 93.3 | 84.9 | 93.9 | 88.9 |
Table 7

Performance of the proposed CNN for 50 epochs

| Optimization | MB size | LR | Accuracy (%) | Precision (%) | Recall (%) | Specificity (%) | F1 score (%) |
| Adam | 16 | 0.001 | 91.6 | 92.3 | 90.9 | 92.4 | 91.6 |
| Adam | 16 | 0.0001 | 93.7 | 91.4 | 96.9 | 90.9 | 94.1 |
| Adam | 32 | 0.001 | 89.4 | 90.6 | 87.8 | 90.9 | 89.2 |
| Adam | 32 | 0.0001 | 91.7 | 95.0 | 87.8 | 95.4 | 91.3 |
| Adam | 64 | 0.001 | 90.1 | 87.3 | 93.9 | 86.4 | 90.5 |
| Adam | 64 | 0.0001 | 92.4 | 93.7 | 90.9 | 93.9 | 92.3 |
| RMS Prop | 16 | 0.001 | 90.3 | 86.8 | 91.1 | 95.2 | 90.8 |
| RMS Prop | 16 | 0.0001 | 92.5 | 91.1 | 93.8 | 90.9 | 92.5 |
| RMS Prop | 32 | 0.001 | 93.2 | 90.2 | 96.1 | 94.8 | 87.8 |
| RMS Prop | 32 | 0.0001 | 94.7 | 92.7 | 96.9 | 92.4 | 94.8 |
| RMS Prop | 64 | 0.001 | 89.3 | 86.1 | 93.9 | 84.8 | 89.9 |
| RMS Prop | 64 | 0.0001 | 92.4 | 92.4 | 92.4 | 92.4 | 92.4 |
| SGDM | 16 | 0.001 | 95.6 | 94.1 | 96.9 | 93.9 | 95.5 |
| SGDM | 16 | 0.0001 | 97.8 | 98.4 | 97.0 | 98.5 | 97.7 |
| SGDM | 32 | 0.001 | 94.6 | 94.0 | 95.5 | 93.9 | 94.7 |
| SGDM | 32 | 0.0001 | 96.8 | 95.2 | 98.2 | 95.4 | 96.9 |
| SGDM | 64 | 0.001 | 90.9 | 90.9 | 90.9 | 90.9 | 90.9 |
| SGDM | 64 | 0.0001 | 92.4 | 91.1 | 93.9 | 90.9 | 92.5 |
Fig. 6

Accuracy vs. iterations and loss vs. iterations of the proposed CNN using Adam optimization algorithm, where max. epochs = 30, MB size = 32, and LR = 0.0001

Fig. 7

Accuracy vs. iterations and loss vs. iterations of the proposed CNN using SGDM optimization algorithm, where max. epochs = 40, MB size = 32, and LR = 0.0001

Fig. 8

Accuracy vs. iterations and loss vs. iterations of the proposed CNN using SGDM optimization algorithm, where max. epochs = 50, MB size = 16, and LR = 0.0001

The summary of the performance metrics for the three epoch settings is illustrated in Fig. 9. It is clear that the proposed network using 50 epochs achieves the best performance. It attains an accuracy of 97.7 %, a precision of 97.0 %, a sensitivity of 98.4 %, and an F1 score of 97.7 %. Hence, the optimization algorithm is selected to be the SGDM, the MB size is 16, and the LR is 0.0001.
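All of the metrics reported in Tables 5, 6, and 7 follow from the four confusion-matrix counts. A minimal sketch with illustrative counts (the counts below assume a balanced test split and are not taken from the paper):

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute the metrics reported in Tables 5-7 from confusion-matrix counts."""
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    precision   = tp / (tp + fp)
    recall      = tp / (tp + fn)          # also called sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, specificity, f1

# Illustrative counts only, chosen for a hypothetical 66/66 test split:
acc, prec, rec, spec, f1 = classification_metrics(tp=64, fp=1, tn=65, fn=2)
print(round(acc, 3), round(spec, 3))  # → 0.977 0.985
```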
Fig. 9

Metric comparison for different numbers of epochs = 30, 40, and 50

The performance of the proposed CNN model is compared with those of the ResNet18, GoogleNet, and DenseNet CNNs. The optimum hyper-parameters are chosen: the batch size is 16, the number of epochs is 50, the starting learning rate is 0.0001, and the optimization algorithm is SGDM. As shown in Figs. 10 and 11, the proposed CNN model achieves the highest accuracy and the lowest loss compared with the other pre-trained models. The accuracy depends on the numbers of true positive and true negative cases. The high accuracy and low loss results confirm the ability of the model to correctly distinguish between COVID-19 and non-COVID cases.
Fig. 10

Accuracy comparison

Fig. 11

Loss comparison

Furthermore, the ROC curves are shown in Fig. 12. The ROC curve describes the relationship between the true positive rate (sensitivity) and the false positive rate (1 − specificity), and it can be used for AUC calculations. The AUC is 98.8 % for the proposed model, which outperforms those of the other pre-trained networks except ResNet18, whose performance is superior by only a 0.1 % difference. This small degradation is due to the higher false positive rate of the proposed model.
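The AUC values quoted here are areas under such ROC curves. A minimal trapezoidal-rule sketch on made-up score/label pairs (not the paper's data; tied scores are ignored for brevity):

```python
def roc_auc(labels, scores):
    """AUC via threshold sweeping: sort by descending score and
    accumulate trapezoid strips between consecutive (FPR, TPR) points."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    auc = prev_fpr = prev_tpr = 0.0
    for score, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        fpr, tpr = fp / neg, tp / pos
        auc += (fpr - prev_fpr) * (tpr + prev_tpr) / 2  # trapezoid strip
        prev_fpr, prev_tpr = fpr, tpr
    return auc

# Illustrative scores only:
labels = [1, 1, 1, 0, 1, 0, 0, 0]
scores = [0.95, 0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.1]
print(roc_auc(labels, scores))  # → 0.9375
```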
Fig. 12

ROC curves

Finally, Table 8 introduces a comparison between the proposed CNN model and state-of-the-art methodologies. The results confirm the superiority of the proposed CNN model in COVID-19 detection with high accuracy.
Table 8

Comparison with related work

| Methodology | Precision (%) | Specificity (%) | Accuracy (%) | AUC (%) | F1 score (%) | Sensitivity (%) |
| [27] | 80.5 | N/A | N/A | 91.4 | N/A | N/A |
| [28] | 98.2 | N/A | 92.2 | 86.7 | 99.6 | N/A |
| [29] | 96.0 | N/A | 70.7 | N/A | 95.2 | N/A |
| [30] | 98.2 | N/A | 92.2 | N/A | 99.6 | N/A |
| [31] | 90.7 | N/A | 91.1 | 83.5 | 95.2 | N/A |
| [32] | 90.7 | N/A | 83.3 | 87.9 | N/A | N/A |
| [33] | 90.0 | N/A | 96.0 | N/A | 96.0 | N/A |
| [34] | 94.4 | N/A | 96.1 | N/A | 95.7 | 97.0 |
| InceptionV3 | 91.2 | 91.3 | 92.2 | 89.4 | 87.6 | 90.4 |
| SqueezeNet | 89.6 | 89.2 | 85.4 | 90.7 | 89.0 | 86.5 |
| MobileNet | 92.4 | 92.3 | 92.1 | 94.5 | 90.8 | 89.5 |
| VGG16 | 97.4 | 97.5 | 94.7 | 97.7 | 94.5 | 90.9 |
| DenseNet | 93.1 | 92.9 | 93.9 | 97.9 | 95.5 | 92.4 |
| ResNet | 96.2 | 96.4 | 96.1 | 99.9 | 96.9 | 96.0 |
| GoogleNet | 94.5 | 94.3 | 92.2 | 95.6 | 95.1 | 94.0 |
| The proposed CNN | 98.4 | 98.5 | 97.8 | 98.9 | 97.7 | 97.0 |

Conclusion

Fast detection of COVID-19 has become an urgent demand. In this paper, an efficient wireless system based on a DCNN for COVID-19 diagnosis has been introduced. For the wireless transmission, QPSK modulation has been chosen due to its high reliability among the different modulation techniques. The DCNN architecture is divided into feature extraction and classification sub-blocks. It consists of six convolution, three max-pooling, one average-pooling, two fully-connected, and Softmax layers. Resized X-ray images with dimensions of have been used. Firstly, the proposed model has been trained to adjust its hyper-parameters. Hence, the SGDM optimization algorithm has been selected with an LR of 0.0001, an MB size of 16, and 50 epochs. Then, the tuned parameters have been utilized in the testing phase to classify the demodulated X-ray images. From simulation results, the proposed model has provided superior performance when compared with other powerful related networks. It has achieved a high accuracy of 97.7 %, a sensitivity of 98.4 %, and an AUC of 98.8 %.
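As a rough illustration of how the stack described above shrinks the spatial resolution before the fully-connected layers, the following sketch walks a feature-map side length through six same-padded convolutions, three max-pooling layers, and one average-pooling layer. The 224 × 224 input, 3 × 3 kernels, two-convolutions-per-block pairing, and 2 × 2 stride-2 pooling are all assumptions; this excerpt does not give those values.

```python
# Shape walk-through for the described layer stack (assumed geometry).
def conv_same(size):
    """3x3 convolution, stride 1, padding 1: spatial size unchanged."""
    return size

def pool2(size):
    """2x2 max/average pooling with stride 2: spatial size halved."""
    return size // 2

size = 224  # assumed input side length
# Six convolutions interleaved with three max-pooling layers
# (two convolutions per block is an assumption):
for block in range(3):
    size = conv_same(conv_same(size))
    size = pool2(size)
size = pool2(size)  # the final average-pooling layer
print(size)         # spatial side length entering the FC layers
```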
References:  10 in total

1.  Large-scale screening of COVID-19 from community acquired pneumonia using infection size-aware classification.

Authors:  Feng Shi; Liming Xia; Fei Shan; Bin Song; Dijia Wu; Ying Wei; Huan Yuan; Huiting Jiang; Yichu He; Yaozong Gao; He Sui; Dinggang Shen
Journal:  Phys Med Biol       Date:  2021-02-19       Impact factor: 3.609

2.  Added Value of Ultra-low-dose Computed Tomography, Dose Equivalent to Chest X-Ray Radiography, for Diagnosing Chest Pathology.

Authors:  Lucia J M Kroft; Levinia van der Velden; Irene Hernández Girón; Joost J H Roelofs; Albert de Roos; Jacob Geleijns
Journal:  J Thorac Imaging       Date:  2019-05       Impact factor: 3.000

3.  A Review of Coronavirus Disease-2019 (COVID-19). (Review)

Authors:  Tanu Singhal
Journal:  Indian J Pediatr       Date:  2020-03-13       Impact factor: 1.967

4.  Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China.

Authors:  Chaolin Huang; Yeming Wang; Xingwang Li; Lili Ren; Jianping Zhao; Yi Hu; Li Zhang; Guohui Fan; Jiuyang Xu; Xiaoying Gu; Zhenshun Cheng; Ting Yu; Jiaan Xia; Yuan Wei; Wenjuan Wu; Xuelei Xie; Wen Yin; Hui Li; Min Liu; Yan Xiao; Hong Gao; Li Guo; Jungang Xie; Guangfa Wang; Rongmeng Jiang; Zhancheng Gao; Qi Jin; Jianwei Wang; Bin Cao
Journal:  Lancet       Date:  2020-01-24       Impact factor: 79.321

5.  A modified deep convolutional neural network for detecting COVID-19 and pneumonia from chest X-ray images based on the concatenation of Xception and ResNet50V2.

Authors:  Mohammad Rahimzadeh; Abolfazl Attar
Journal:  Inform Med Unlocked       Date:  2020-05-26

6.  Dense U-net Based on Patch-Based Learning for Retinal Vessel Segmentation.

Authors:  Chang Wang; Zongya Zhao; Qiongqiong Ren; Yongtao Xu; Yi Yu
Journal:  Entropy (Basel)       Date:  2019-02-12       Impact factor: 2.524

7.  COVID-19 classification using deep feature concatenation technique.

Authors:  Waleed Saad; Wafaa A Shalaby; Mona Shokair; Fathi Abd El-Samie; Moawad Dessouky; Essam Abdellatef
Journal:  J Ambient Intell Humaniz Comput       Date:  2021-03-02

8.  A Novel Coronavirus from Patients with Pneumonia in China, 2019.

Authors:  Na Zhu; Dingyu Zhang; Wenling Wang; Xingwang Li; Bo Yang; Jingdong Song; Xiang Zhao; Baoying Huang; Weifeng Shi; Roujian Lu; Peihua Niu; Faxian Zhan; Xuejun Ma; Dayan Wang; Wenbo Xu; Guizhen Wu; George F Gao; Wenjie Tan
Journal:  N Engl J Med       Date:  2020-01-24       Impact factor: 91.245

9.  Using Artificial Intelligence to Detect COVID-19 and Community-acquired Pneumonia Based on Pulmonary CT: Evaluation of the Diagnostic Accuracy.

Authors:  Lin Li; Lixin Qin; Zeguo Xu; Youbing Yin; Xin Wang; Bin Kong; Junjie Bai; Yi Lu; Zhenghan Fang; Qi Song; Kunlin Cao; Daliang Liu; Guisheng Wang; Qizhong Xu; Xisheng Fang; Shiqin Zhang; Juan Xia; Jun Xia
Journal:  Radiology       Date:  2020-03-19       Impact factor: 11.105

