
Wavelet and deep learning-based detection of SARS-nCoV from thoracic X-ray images for rapid and efficient testing.

Amar Kumar Verma1, Inturi Vamsi2, Prerna Saurabh3, Radhika Sudha1, Sabareesh G R2, Rajkumar S3.   

Abstract

This paper proposes a wavelet and artificial intelligence-enabled rapid and efficient testing procedure for patients with Severe Acute Respiratory Syndrome Coronavirus (SARS-nCoV) through a deep learning approach applied to thoracic X-ray images. Presently, the virus infection is diagnosed primarily by a process called real-time Reverse Transcriptase-Polymerase Chain Reaction (rRT-PCR), based on its genetic prints. This whole procedure takes a substantial amount of time to identify and diagnose patients infected by the virus. The proposed research uses wavelet-based convolutional neural network architectures to detect SARS-nCoV. The CNN is pre-trained on ImageNet and trained end-to-end using thoracic X-ray images. To execute the Discrete Wavelet Transform (DWT), the available mother wavelet functions from different families, namely Haar, Daubechies, Symlet, Biorthogonal, Coiflet, and Discrete Meyer, were considered. Two-level decomposition via DWT is adopted to extract prominent features (peripheral and subpleural ground-glass opacities, often in the lower lobes) explicitly from thoracic X-ray images and to suppress the effect of noise, further enhancing the signal-to-noise ratio. The proposed wavelet-based deep learning models for both two-class instances (COVID vs. Normal) and four-class instances (COVID-19 vs. PNA bacterial vs. PNA viral vs. Normal) were validated on publicly available databases using the k-Fold Cross Validation (k-Fold CV) technique. In addition to these X-ray images, images of recent COVID-19 patients were further used to examine the model's practicality and real-time feasibility in combating the current pandemic situation. It was observed that the Symlet 7 approximation component with two-level decomposition manifested the highest test accuracy of 98.87%, followed by Biorthogonal 2.6 with an efficiency of 98.73%.
While the test accuracy for Symlet 7 and Biorthogonal 2.6 is high, Haar and Daubechies with two levels have demonstrated excellent validation accuracy on unseen data. It was also observed that the precision, the recall rate, and the dice similarity coefficient for four-class instances were 98%, 98%, and 99%, respectively, using the proposed algorithm.
© 2021 Elsevier Ltd. All rights reserved.

Keywords:  COVID-19; Medical imaging; Transfer learning; Wavelets; rRT-PCR

Year:  2021        PMID: 34366576      PMCID: PMC8327617          DOI: 10.1016/j.eswa.2021.115650

Source DB:  PubMed          Journal:  Expert Syst Appl        ISSN: 0957-4174            Impact factor:   6.954


Introduction

The outbreak of the contagious disease COVID-19, which emerged in late 2019 and was declared a pandemic by the WHO in early 2020, has impacted humanity and the global economy. The epidemic is thought to have originated in December 2019 in Wuhan, Hubei Province, China (Ma, 2020). COVID-19 is caused by a virus that belongs to the Coronaviridae subfamily of the order Nidovirales. Severe Acute Respiratory Syndrome (SARS), Middle East Respiratory Syndrome (MERS), and COVID-19 are all driven by these viruses (Organization, 2020, Sohrabi et al., 2020). COVID-19 is caused by a new form of the virus, classified as SARS-CoV-2, that produces a variety of symptoms. Fever, dry cough, tiredness, fatigue, headache, sore throat, muscular pain, diarrhea, loss of taste or smell, skin rash, myalgias, shortness of breath, thoracic discomfort, loss of speech or movement, and respiratory sickness are only a few of the symptoms. Severe infection can cause pneumonia, heart damage, multi-organ failure, and death in certain individuals (W.H.O., 2020). Owing to its infectious nature, the virus transmits from person to person through the droplets of an infected individual's sneeze or cough. Furthermore, if a second person contacts the droplets or inhales the virus-laden air, that person may quickly become infected. According to the World Health Organization, as of 5:06 pm CEST, 1 June 2021, there had been 170,426,245 confirmed cases of COVID-19 globally, including 3,548,628 deaths, reported to WHO. As of 30 May 2021, a total of 1,579,416,705 vaccine doses had been administered (Worldometer, 2020). Because the development of a vaccine is still in its early stages, early detection of COVID-19 illness is critical for reducing and controlling the spread of the disease (Catanzaro, Fagiani, Racchi, Corsini, Govoni, & Lanni, 2020). If a patient is infected, they must be isolated as soon as possible to prevent the infection from spreading further.
Currently, the process known as reverse transcription-polymerase chain reaction (RT-PCR) is utilized for testing and diagnosing whether or not a person is infected with COVID-19 (Corman et al., 2020, Mei et al., 2020, Patchsung et al., 2020, Wang, Xu et al., 2020). Respiratory samples are used in this test, and the results are available in four to eight hours. Rapid diagnostic tests (RDT), enzyme-linked immunosorbent assays (ELISA), neutralization assays, chemiluminescent immunoassays, and binding antibodies are all used in certain antibody studies to detect COVID-19. Many health organizations, however, occasionally employ thoracic X-ray scan images to diagnose lung disease. Most nations ran out of testing kits in the early phases of the COVID-19 outbreak. In comparison to the RT-PCR results (low sensitivity, 60%–70%), the CT image-based lab test (sensitivity, 98%) showed promising findings for the COVID-19 infection rate (Fang et al., 2020, Harmon et al., 2020). Moreover, a few studies claim that lab reports combined with clinical image aspects are even better for early COVID-19 identification (Lee et al., 2020, Long et al., 2020, Shi et al., 2020). Many health organizations recommend thoracic X-ray imaging as a promising alternative for early COVID-19 screening (Bressem, Adams, Erxleben, Hamm, Niehues, & Vahldiek, 2020). With the developments in AI, the advancements in pattern recognition and machine learning can be exploited to automate the detection of COVID-19 from X-ray images (Greenspan et al., 2020, Luengo-Oroz et al., 2020, Ting et al., 2020).
In the past, deep learning-based methods have provided encouraging results in the medical field, such as breast cancer detection (Celik et al., 2020, Cruz-Roa et al., 2014), skin cancer classification (Codella et al., 2017, Esteva et al., 2017), brain disease classification (Talo, Yildirim, Baloglu, Aydin, & Acharya, 2019), arrhythmia detection (Hannun et al., 2019), pneumonia detection from X-ray images (Rajpurkar et al., 2017), and lung segmentation (Gaál et al., 2020, Souza et al., 2019). An accurate, fast, and efficient wavelet-based deep learning model is proposed to detect COVID-19 infection automatically. The proposed work uses wavelet-based convolutional neural network architectures to detect SARS-nCoV. The CNN is pre-trained on ImageNet and trained end-to-end using thoracic X-ray images. Two-level decomposition via DWT is adopted to extract prominent features explicitly from thoracic X-ray images of publicly accessible databases to detect SARS-nCoV. Many past studies have reported the use of deep learning techniques in detecting diseases at an early stage. The scientific community has tried to extend this learning in their fight against the coronavirus. Early detection can reduce the severity for an affected person and prevent further spreading by isolating the affected person.

Literature survey

COVID-19 detection from thoracic X-ray imaging and computed tomography images is the focus of many of the studies in the literature. To identify SARS-nCoV, the conventional strategy primarily employs a process known as transfer learning. Typically, limited data sets are used to perform the transfer learning approach. Transfer learning takes knowledge from a large collection of data acquired by CNNs and applies it to a new data set to complete other comparable tasks. In the beginning, CNNs are trained to extract the most notable features from a large-scale dataset for certain tasks and outcomes. Their efficiency is evaluated to determine if it is suitable for transfer learning. This pre-trained CNN is then utilized to extract features from a new set of images via transfer learning while keeping both its initial architecture and the weights it has learned. After that, the new model is utilized for classification. This technology would have a substantial influence on automated X-ray image identification and retrieval for SARS-nCoV detection. AlexNet, VGG-16, VGG-19, SqueezeNet, GoogleNet, MobileNet-V2, ResNet-18, ResNet-50, ResNet-101, and Xception were employed in the majority of earlier research (Minaee, Kafieh, Sonka, Yazdani, & Soufi, 2020). Xu et al. (2020) introduced a pre-trained ResNet-18 network with a location-attention architecture. This research aimed to develop an early screening model to identify COVID-19 pneumonia from Influenza-A viral pneumonia (IAVP) and healthy cases through pulmonary CT images using deep learning techniques. Finally, the type of infection and the overall confidence score for each CT case were determined using the Noisy-OR Bayesian function. Zhang, Xie, Li, Shen and Xia (2020) noted the concise duration of cluster formation in viral pneumonia. This paper used one-class classification to tackle the problem of anomaly detection.
It uses a Confidence-Aware Anomaly Detection (CAAD) model to differentiate viral pneumonia from non-viral pneumonia. This model has different components, namely a feature extractor, an anomaly detector, and a confidence predictor. If the anomaly detector's score is large or the confidence predictor's score is small, the model flags the input instance as indicating viral pneumonia. This model treats all cases of viral pneumonia as anomalies rather than taking individual cases as anomalies. Wang and Wong (2020) proposed COVID-Net, a deep CNN that aims to make the right prediction based on its methods to give better results in the screening process. It also promises to provide valid information from the CXR images. Salman, Abu-Naser, Alajrami, Abu-Nasser, and Alashqar (2020) used an Inception v3 architecture for two-class instances (COVID-19 and non-COVID) and achieved an accuracy of 100%. The main drawbacks of this method were that it used a small dataset and ignored the SARS, MERS, and ARDS images. Sethy and Behera (2020) proposed a ResNet-50 and SVM model that can detect patients infected by COVID-19 from X-ray images using a deep feature technique that could be useful for medical practitioners. Wang, Kang et al. (2020) proposed a DenseNet121 architecture to characterize the COVID-19 images and found multiple factors responsible for identifying patients with COVID-19 or finding high-risk patients. The deep learning system automatically focused on suspicious areas that displayed conflicting characteristics with reported radiological findings without human intervention. Narin, Kaya, and Pamuk (2020) proposed an automated system to diagnose people infected with COVID-19 using CNN models that apply ResNet-50, InceptionV3, and InceptionResNetV2 to X-ray images. This model reports a confusion matrix under a 5-fold cross-validation technique.
It is demonstrated that the pre-trained ResNet-50 model provides the best classification accuracy compared to the other two methods proposed. Gozes et al. (2020) used a lung segmentation module to extract the features and a ResNet-50 architecture for classification, achieving a sensitivity of 94% and a specificity of 98%. This paper proposed a framework that utilizes robust 2D and 3D deep learning models, modifying and adapting existing AI models and combining them with clinical understanding. Many retrospective studies are being performed to understand and analyze the COVID-19 patients diagnosed and to estimate each patient's disease evolution. Apostolopoulos and Mpesiana (2020) aimed to evaluate the performance of state-of-the-art CNN architectures proposed over the years for medical image classification. The VGG-19 architecture was used for the automatic diagnosis of COVID-19 from thoracic X-rays. Barstugan, Ozkaya, and Ozturk (2020) used a range of feature extraction methods such as Gray Level Co-occurrence Matrix, Local Directional Pattern, Gray Level Run Length Matrix, Gray-Level Size Zone Matrix, and Discrete Wavelet Transform algorithms, achieving an accuracy of 99.68%. The circumstances shown by CT images (bronchiectasis, signs of lesion swelling, shadowing, etc.) provide a simple diagnosis of COVID-19. The study also used 2-fold, 5-fold, and 10-fold cross-validation. Hemdan, Shouman, and Karar (2020) introduced a new paradigm of pre-trained deep learning classifiers, i.e., COVIDX-Net, to help radiologists classify or validate COVID-19 2D X-ray images automatically. DCNNs are used to identify common thoracic diseases in CT images, such as tuberculosis screening and mediastinal lymph nodes; however, their application to COVID-19 detection is still limited. Those DCNN models are: VGG19, MobileNetV2, Xception, InceptionResNetV2, InceptionV3, ResNetV2 and DenseNet121.
Each deep neural network model analyzes the normalized intensities and ground-glass opacity of the X-ray images to identify the patient status as either a negative or a positive COVID-19 instance. A summary of the COVID-19 detection methods using thoracic X-ray and CT images is elaborated in Table 1. It shows that many methods depend on transfer learning and used minimal datasets. There is broader scope for improving the classification efficiency using tested and proven models that can use wavelet-based feature extraction. The Wavelet Transform (WT) measures the correlation between the input image and the mother wavelet through an adaptable 2D time-scale window. It performs translation (time-domain) and dilation (scaling) to decompose the signal into a series of wavelet coefficients. Due to its multi-resolution property, WT can capture the change in the transient energy phenomenon contained in the signal/image. Thus, it has various applications in anomaly detection, edge detection, signal, and image processing (Inturi et al., 2020, Wang and Du, 2019). The Discrete Wavelet Transform (DWT) decomposes the input image into a set of approximation and detail coefficients. Further, each previous level's approximation coefficients are subjected to the next level of decomposition, and so on. Various authors have executed the DWT algorithm to de-noise the raw signals captured through multiple sensors and accomplished fault diagnosis in various rotating machinery such as gearboxes and motors (Radhika et al., 2010, Radhika et al., 2015, Vamsi et al., 2019). Praveen, Vamsi, Suresh, and Radhika (2020) performed three-level DWT decomposition and extracted statistical indicators from the sub-images to evaluate the surface roughness of formed specimens. Nazarahari, Namin, Markazi, and Anaraki (2015) decomposed electrocardiography signals through four-level DWT and generated a feature vector database.
The authors have further classified the feature vector database through the multilayer perceptron algorithm to distinguish between the various cardiac arrhythmias. Chakraborty and Nandy (2020) have combined the DWT and deep neural network algorithms to classify the pathological gait abnormality pattern.
Table 1

Summary of the COVID-19 detection and diagnostic methods for thoracic X-ray and CT image.

Authors | Image modality | Method used | COVID-19 positive cases | Accuracy (in %)
Xu et al. (2020) | CT images | ResNet-18 + location attention | 219 (+) | 86.70
Zhang, Satapathy et al. (2020) | X-ray images | ResNet-18 + classification head + anomaly detection head | 100 (+) | 96.00
Wang and Wong (2020) | X-ray images | COVID-Net | 53 (+) | 92.40
Sethy and Behera (2020) | X-ray images | ResNet50 + SVM | 25 (+) | 95.38
Wang, Zha et al. (2020) | CT images | DenseNet121 | 924 (+) | 80.12
Narin et al. (2020) | X-ray images | Deep CNN ResNet-50 | 50 (+) | 98.00
Gozes et al. (2020) | CT images | ResNet-50 | 50 (+) | 94.00
Apostolopoulos and Mpesiana (2020) | X-ray images | VGG-19 | 244 (+) | 93.48
Abbas, Abdelsamea, and Gaber (2020) | X-ray images | DeTrac | 105 (+) | 95.12
Hall, Paul, Goldgof, and Goldgof (2020) | X-ray images | ResNet50 + VGG16 | 135 (+) | 94.40
Li et al. (2020) | CT images | COVNet | 400 (+) | 90.00
Zheng et al. (2020) | CT images | DeCoVNet | 313 (+) | 90.00
Song, Ying et al. (2020) | CT images | DRE-Net | 777 (+) | 86.00
Ozturk, Talo, Yildirim, Baloglu, Yildirim, and Acharya (2020) | X-ray images | DarkCovidNet | 125 (+) | 87.02
Khan, Shah, and Bhat (2020) | X-ray images | CoroNet | 284 (+) | 89.60
Wang, Kang et al. (2020) | CT images | M-Inception | 195 (+) | 82.90
Hassantabar, Ahmadi, and Sharifi (2020) | X-ray images | CNN | 100 (+) | 93.20
Shibly, Dey, Islam, and Rahman (2020) | X-ray images | VGG-16 based Fast R-CNN | 183 (+) | 97.36
Wang and Wong (2020) | CT images | FGCNet | 320 (+) | 97.71
Wang, Nayak, Guttery, Zhang and Zhang (2020) | CT images | CCSHNet | 284 (+) | 98.30
Zhang, Satapathy, Zhu, Górriz and Wang (2020) | CT images | 7L-CNN-CD | 142 (+) | 94.44

Methodology

The discrete wavelet transform (DWT) implements a combination of low-pass and high-pass filters to achieve the approximation and detail wavelet coefficients, respectively, by down-sampling the input image. Thus, a single DWT level convolves each row and column of the input image data matrix to obtain a set of four fine-scale sub-images. The direction of the convolution of rows and columns of an input image through the low- and high-pass filters represents the wavelet coefficients' orientation. As described in Fig. 1, the convolution of the low-pass filter (LPF) initially on the rows of the input image, followed by the columns of the image, results in the approximation wavelet coefficients (A1) (Radhika, Tamura, & Matsui, 2012). Similarly, the convolution of the LPF on the rows and the high-pass filter (HPF) on the columns of the image generates the horizontal detail wavelet coefficients (hD1). Also, the vertical detail coefficients (vD1) are formed by the convolution of the HPF on the rows, followed by the LPF on the image columns. Finally, the HPF's convolution on the rows, followed by the columns of the image, generates the diagonal detail wavelet coefficients (dD1). The approximation coefficients of the first level (sub-image A1) are further decomposed to achieve the next-level coarse-scaled wavelet coefficients, i.e., A2, hD2, vD2 & dD2; the A2 sub-image is then decomposed at the next level, and this iteration continues further.
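The row-then-column filtering scheme described above can be sketched for the Haar wavelet in a few lines of NumPy. The helper name `haar_dwt2` is hypothetical, and this is a minimal illustration of the single-level decomposition rather than the authors' implementation (which spans the full set of mother wavelet families):

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar DWT: filter the rows, then the columns,
    yielding the approximation (A1) and horizontal/vertical/diagonal
    detail (hD1, vD1, dD1) sub-images, each half the input size."""
    img = np.asarray(img, dtype=float)
    # Row direction: low-pass = pairwise average, high-pass = pairwise difference
    lo_r = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)
    hi_r = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)
    # Column direction on each row-filtered band (orientation as in the text)
    A1  = (lo_r[0::2, :] + lo_r[1::2, :]) / np.sqrt(2)  # LPF rows, LPF cols
    hD1 = (lo_r[0::2, :] - lo_r[1::2, :]) / np.sqrt(2)  # LPF rows, HPF cols
    vD1 = (hi_r[0::2, :] + hi_r[1::2, :]) / np.sqrt(2)  # HPF rows, LPF cols
    dD1 = (hi_r[0::2, :] - hi_r[1::2, :]) / np.sqrt(2)  # HPF rows, HPF cols
    return A1, hD1, vD1, dD1
```

Because the Haar filter pair is orthonormal, the four sub-images together preserve the energy of the input, and a constant image yields zero detail coefficients.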
Fig. 1

Single-level decomposition through 2-D DWT.

Mathematically, the DWT is expressed as

DWT(p, q) = 2^(-p/2) Σ_t x(t) ψ*(2^(-p) t - q)  (1)

where DWT(p, q) are the wavelet coefficients, ψ is the base wavelet function, ψ* is the complex conjugate of the base wavelet function, and p, q are the integers corresponding to scale and translation, respectively. Thus, the approximation and detail wavelet coefficients are achieved by convolving the input image with the scaling function and the wavelet function represented in Eqs. (2), (3), respectively. Kumar Singh, Abdel-Nasser, Pandey, and Puig (2021) proposed a Discrete Wavelet Transform (DWT) in conjunction with a deep learning-based method and segmented the COVID-19-infected regions from lung Computed Tomography (CT) images. However, that investigation is primarily focused on a single case, i.e., COVID-19 CT image data. Rajpal, Lakhyani, Singh, Kohli, and Kumar (2021) used a deep convolutional neural network (ResNet-50) to learn the features from chest X-ray images and distinguished among three classes, namely, normal, COVID-19, and pneumonia. Gungor (2021) de-noised the CT images through DWT using various mother wavelets and diagnosed the COVID-19 disease. It was observed that the second-level approximation coefficients of the Daubechies (db3) wavelet yielded better results than the other mother wavelets. Chaudhary and Pachori (2021) implemented the wavelet packet transform on chest CT images and X-ray images in conjunction with a pre-trained convolutional neural network to classify among the healthy, COVID-19, and pneumonia conditions. Previous studies have identified the appropriate base wavelet function by examining the energy distribution of the 1-D signal at various bands of frequency (Balavignesh, Gundepudi, Sabareesh, & Vamsi, 2018). However, the same criteria may not perform well when analyzing a 2-D image.
Therefore, in this investigation, all the available base wavelet functions from various families, namely, Haar, Daubechies, Symlet, Biorthogonal, Coiflet, and Discrete Meyer, are considered for executing the DWT. Two-level decomposition through DWT is adopted to suppress the influence of noise on the input image, further enhancing the signal-to-noise ratio. Initially, image augmentation is performed, and four images in the bands of red, green, blue, and gray-scale format are individually retrieved from the single input image. Later, two-level DWT is implemented on all the retrieved image formats, and various sub-images in the form of wavelet coefficients are achieved. Thus, a total of thirty-two sub-images (4 bands * 2 levels * 4 wavelet coefficients) are obtained from a single input image. A similar procedure is implemented on all the images from the four different health conditions (classes), namely normal, pneumonia viral, pneumonia bacterial, and COVID-19, as shown in Fig. 2, and various sub-images in the form of wavelet coefficients are achieved. Further, the obtained sub-images are channeled as input to the CNN algorithm to distinguish between the different health conditions.
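The thirty-two-sub-image extraction above can be sketched with the PyWavelets (`pywt`) library, here using the Symlet 7 wavelet reported as best-performing. The function name and the luma weights for the gray-scale band are illustrative assumptions, not the authors' code:

```python
import numpy as np
import pywt

def wavelet_subimages(image_rgb):
    """Decompose one RGB image into the 32 sub-images described in the
    text: 4 bands (R, G, B, gray) x 2 DWT levels x 4 coefficient sets
    (A, hD, vD, dD); each level decomposes the previous approximation."""
    r, g, b = image_rgb[..., 0], image_rgb[..., 1], image_rgb[..., 2]
    gray = 0.299 * r + 0.587 * g + 0.114 * b  # standard luma conversion (assumed)
    subimages = []
    for band in (r, g, b, gray):
        a = band
        for level in range(2):                   # two-level decomposition
            a, (h, v, d) = pywt.dwt2(a, 'sym7')  # next level uses previous A
            subimages.extend([a, h, v, d])
    return subimages
```

Calling `wavelet_subimages` on any RGB array yields exactly 32 coefficient sub-images, matching the 4 x 2 x 4 count stated above.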
Fig. 2

Sample of thoracic X-ray image dataset for four-class instances.

Previous radiological studies have shown that image features, such as Ground-Glass Opacity (GGO) or mixed GGO and consolidation, are similar in most cases. COVID-19 pneumonia with bilateral, multifocal lower-lung involvement (Chung et al., 2020, Kanne, 2020, Song, Shi et al., 2020) is anticipated to have a peripheral spread. Despite negative reverse-transcription polymerase chain reaction findings, viral pneumonia features may be highly suspicious for COVID-19 infection given a typical clinical presentation and exposure to other individuals with COVID-19. Repeat swab testing and patient isolation should be considered in these circumstances. An appropriate base wavelet function is required to decompose the input image through DWT, as shown in Fig. 3, Fig. 4.
Fig. 3

Reconstructed thoracic X-ray image coefficient for db2 with DWT two-level approximation.

Fig. 4

Reconstructed thoracic X-ray image coefficient for db2 with DWT two-level.


Wavelet-based deep CNN

As the next step, the collected DWT-featured sub-images are channeled as input to the CNN algorithm to distinguish between the various health conditions, i.e., COVID-19 vs. Pneumonia Viral vs. Pneumonia Bacterial vs. Normal. The deep convolutional neural network architecture used here is based on depthwise separable convolution layers, commonly referred to simply as separable convolutions in deep learning systems: each standard convolution is broken down into a smaller set of layer functions, and each of these layers is itself a neural network, which may be either single- or multi-layered. The entire workflow of the network-based architecture for the deep convolutional neural network using DWT-featured X-ray images is shown in Fig. 5. The artificial intelligence-enabled wavelet-based deep CNN takes only the approximation component as input, since it yielded higher identification accuracy, as examined extensively in the discussion.
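The appeal of the depthwise separable factorization on which Xception is built can be illustrated with simple parameter arithmetic; the function names below are hypothetical, and biases are ignored for brevity:

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution from c_in to c_out channels."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise separable convolution: one k x k depthwise filter per
    input channel, followed by a 1x1 pointwise convolution that mixes
    channels. Far fewer weights than the standard convolution."""
    depthwise = k * k * c_in   # spatial filtering per channel
    pointwise = c_in * c_out   # 1x1 channel mixing
    return depthwise + pointwise

# e.g. a 3x3 convolution from 128 to 256 channels
standard = conv_params(3, 128, 256)             # 294,912 weights
separable = separable_conv_params(3, 128, 256)  # 1,152 + 32,768 = 33,920 weights
```

For this example the separable form needs roughly 11% of the parameters of the standard convolution, which is why stacking such layers keeps the network tractable.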
Fig. 5

Network-based architecture for deep CNN using DWT featured thoracic X-ray images.


CNN model architecture and development

Most of the conventional studies used well-known network-based CNN architectures such as SqueezeNet, AlexNet (Iandola, Han, Moskewicz, Ashraf, Dally, & Keutzer, 2016), VGG-16 & VGG-19 (Sengupta, Ye, Wang, Liu, & Roy, 2019), GoogleNet (Xie et al., 2018), MobileNet-V2 (Ma, Zhang, Zheng, & Sun, 2018), ResNet-18 & ResNet-50 & ResNet-101 (Szegedy, Ioffe, Vanhoucke, & Alemi, 2016), and Xception (Chollet, 2017), and it turned out that only a few models, such as Xception, ResNet-50, and VGG-16, achieved better detection accuracy for SARS-nCoV from thoracic X-ray images. Thus, the proposed work used these three network-based pre-trained convolutional neural network architectures for further studies. The accuracy was enhanced using the dense layer of Keras sequential model-2 from the research work of Verma, Nagpal, Desai, and Sudha (2020). A typical convolutional neural network architecture consists of the convolution layer, the subsampling layer, and a fully connected layer. All the layers mentioned above are stacked up to form an intact CNN architecture. In addition to these key layers, the CNN includes auxiliary layers, such as batch normalization to boost training and a dropout layer to mitigate overfitting. The proposed deep CNN Xception-I uses Xception as a base model with a dropout layer and seven fully connected layers. Activation functions are used in the model to impart non-linearity; without an activation function, a neural network behaves like a linear regression model. One of the commonly used activation functions is softmax, which is used in the fully connected layer for the multi-class classification problem. The softmax outputs a vector representing the probability distribution over the several possible outcomes. Xception-I has a total of 23,598,734 parameters, of which 23,544,206 are trainable and 54,528 are non-trainable.
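The softmax behavior described above can be sketched in NumPy. This is the standard numerically stable formulation, not the authors' code, and the logits below are hypothetical:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax: maps raw class scores to a
    probability distribution over the output classes."""
    z = np.asarray(logits, dtype=float)
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits for (COVID-19, pneumonia viral, pneumonia bacterial, normal)
probs = softmax([4.2, 1.1, 0.3, -0.5])
```

The outputs are positive, sum to one, and preserve the ordering of the logits, so the predicted class is simply the index of the largest probability.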

Training and stop learning process for CNN

The proposed wavelet-based deep convolutional neural network Xception-I architecture model was implemented in Keras on top of TensorFlow. TensorFlow is a free and open-source software library for dataflow and differentiable programming across a variety of tasks. It is a symbolic math library used for machine learning applications such as neural networks. The thoracic X-ray image datasets collected are shown in Table 2. The test data ensured that the model is not biased towards the training outcomes. First, the weights of each layer are initialized at random, and forward propagation is carried out for all input image data. The output-layer error was minimized using various optimizers such as RMSprop, Adam, Adamax, SGD with momentum, and AdaGrad. Then back-propagation was applied to adjust the weights closer to the target. The bigger the number of trainable network parameters, the greater the number of polynomial terms required to fit the data. The whole training phase was performed on the Google Colaboratory Ubuntu platform equipped with a Tesla K80 graphics card. The convolutional neural network models were trained for different architectures with various epoch sizes, such as 40, 60, 80, 100, and 120, with a learning rate of 0.0001, a decay of 1e−6, and a batch size of 10. The algorithm checks for five consecutive epochs without improvement to stop the learning process as and when progress in generalization decreases.
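The stop-learning criterion, read here as early stopping with a patience of five epochs (an interpretation of the five-check rule above, not the authors' exact code), can be sketched as:

```python
def train_with_early_stopping(val_accuracies, patience=5):
    """Return the epoch (0-based) at which training stops: when the
    validation accuracy has not improved for `patience` consecutive
    epochs. `val_accuracies` stands in for per-epoch validation results."""
    best, since_best = float('-inf'), 0
    for epoch, acc in enumerate(val_accuracies):
        if acc > best:
            best, since_best = acc, 0   # new best: reset the counter
        else:
            since_best += 1
            if since_best >= patience:
                return epoch            # stop: no progress in generalization
    return len(val_accuracies) - 1      # ran all epochs without triggering
```

In a real Keras training loop this corresponds to monitoring validation accuracy with an early-stopping callback, so that a run with a nominal budget of 40 epochs may halt earlier once generalization plateaus.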
Table 2

Image dataset description.

SN. | Disease | Number of Images
1 | Normal | 2398
2 | Pneumonia Viral | 3204
3 | Pneumonia Bacterial | 846
4 | COVID-19 | 1552
  | Total | 8000

Results and discussions

This section describes in detail the implementation of the methodology described earlier and the results obtained. First, the CNN model was fed the raw thoracic X-ray images without any feature extraction. The CNN's precision for three-class and four-class instances in detecting SARS-nCoV is analyzed initially, and its identification accuracy and loss are investigated extensively. The proposed Xception-I architecture performed better for four-class instances than for three-class instances owing to the larger image dataset. Features were also extracted from the thoracic X-ray images using the discrete wavelet transform (DWT) technique. More thoracic X-ray images and instances can contribute to the extraction of distinct features before being fed to the CNN model at the preprocessing stage. Since more information is available at the preprocessing stage, the CNN model's reliability can be improved, making it robust in detecting SARS-nCoV. Therefore, wavelet-transformed X-ray images were used to retrieve prominent features, which were then fed to the deep CNN model. Several wavelets and their families were examined to determine which are best suited for feature extraction. The test accuracy (in %) for several base wavelets with one and two levels of DWT decomposition was evaluated and is described in Table 3.
Table 3

Accuracy (in%) for various wavelets.

Wavelet | Level 1 | Level 2
Haar | 97.25 | 98.35
Symlet | 96.62 | 98.87
Biorthogonal | 95.92 | 98.73
Coiflet | 96.25 | 98.31
Daubechies | 98.18 | 98.43
Discrete Meyer | 93.77 | 96.90
Compared to single-level ones, base wavelets with two levels of decomposition were found to have performed remarkably well. Therefore, the rest of the wavelets, such as Haar, Symlet, Biorthogonal, Coiflet, Daubechies, Discrete Meyer, and their families, were evaluated on only two decomposition levels. Sym7 from Symlet (98.87%), Bior2.6 from Biorthogonal (98.73%), Coif5 from Coiflet (98.45%), DB2 from Daubechies (98.43%), Haar with two-level (98.35%), and Discrete Meyer with two-level (96.90%), wavelet families performed exceptionally well compared to the other wavelet families as listed in Table 4.
Table 4

Accuracy (in%) with respect to the several wavelet families for two-levels of decomposition for DWT.

Biorthogonal wavelet | Accuracy (%)
bior1.1 | 97.57
bior1.3 | 98.17
bior1.5 | 98.17
bior2.2 | 97.89
bior2.4 | 98.45
bior2.6 | 98.73
bior2.8 | 98.17
bior3.1 | 96.90
bior3.3 | 96.06
bior3.5 | 96.62
bior3.7 | 97.53
bior3.9 | 96.34
bior4.4 | 98.45
bior5.5 | 96.62
bior6.8 | 95.77

Three-class instance

Fig. 6 demonstrates the accuracy of the proposed deep convolution neural network model for three-class instances, with and without feature vectors. The figure shows the CNN model’s accuracy for the epoch sizes for a Haar wavelet with one and two levels of decomposition and raw X-ray image, i.e., without any feature extraction method.
Fig. 6

Accuracy with and without wavelet for three-class instances.

It can be concluded that the CNN model with two-level Haar decomposition has achieved greater accuracy compared to single-level DWT and raw thoracic X-ray images. The descriptive statistics of the obtained accuracy over epochs, with and without wavelet, for three-class instances of the Xception-I architecture are tabulated in Table 5. The maximum detection accuracy for SARS-nCoV was obtained for Haar with two-level decomposition, i.e., 98.59%. For real-time viability, the mean and upper 95% confidence interval (CI) for Haar with two levels of decomposition are comparatively larger. Various performance attributes of the Xception architecture, such as time per epoch and per step, training loss, training accuracy, validation loss, and validation accuracy, were assessed and are shown in Table 6. Haar with two-level decomposition took 94 s per epoch for a total epoch size of 40 for three-class instances and has the lowest training and validation losses of 0.0290 and 0.0432, respectively.
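The descriptive statistics reported in Table 5 (mean, standard deviation, standard error, and 95% CI of the mean) can be reproduced for any accuracy series with the Python standard library. The Student-t critical value is an assumption, since the paper does not state how many accuracy readings enter each row (2.262 corresponds to 9 degrees of freedom, i.e. a sample of 10):

```python
import math
import statistics

def describe(accs, t_crit=2.262):
    """Table 5-style descriptive statistics for a list of accuracy
    readings: mean, sample standard deviation, standard error of the
    mean, and a two-sided 95% confidence interval for the mean."""
    mean = statistics.mean(accs)
    sd = statistics.stdev(accs)        # sample standard deviation
    se = sd / math.sqrt(len(accs))     # standard error of the mean
    return {"mean": mean, "sd": sd, "se": se,
            "ci95": (mean - t_crit * se, mean + t_crit * se)}
```

With the appropriate t value for the sample size, the CI bounds follow directly from mean ± t · SE, which matches how the Table 5 intervals relate to the listed means and standard errors.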
Table 5

Descriptive statistics of the accuracy with and without wavelet for three-class instances.

Descriptive Statistics | Mean | Standard Deviation | SE of Mean | Lower 95% CI of Mean | Upper 95% CI of Mean | Min | Median | Max
Haar_Wavelet_Level1 | 97.14 | 1.57 | 0.47 | 96.08 | 98.20 | 92.82 | 97.61 | 98.45
Haar_Wavelet_Level2 | 98.18 | 2.73 | 0.82 | 94.34 | 98.01 | 88.73 | 97.18 | 98.59
Without Wavelet | 95.74 | 2.42 | 0.73 | 94.11 | 97.37 | 88.87 | 96.79 | 97.55
Table 6

Performance comparison of Xception-I with and without wavelet for three-class instances.

Model (Xception-I) | Time Taken Per Epoch | Time Taken Per Step | Training Loss | Training Accuracy | Validation Loss | Validation Accuracy
Haar_Wavelet_Level1 | 173 s | 605 ms | 0.0651 | 0.9776 | 0.0567 | 0.9845
Haar_Wavelet_Level2 | 94 s | 330 ms | 0.0290 | 0.9912 | 0.0432 | 0.9859
Without Wavelet | 89 s | 412 ms | 0.0316 | 0.9888 | 0.0822 | 0.9755

Four-class instance

Fig. 7 demonstrates the accuracy of the proposed deep convolution neural network model for four-class instances. The figure shows the CNN model’s accuracy over epoch size for the Haar wavelet with one and two decomposition levels. It can be concluded that, irrespective of the class instance, the CNN model with two-level Haar decomposition achieved greater accuracy than single-level DWT. Similarly, the descriptive statistics of the accuracy obtained over epochs with the Haar wavelet for four-class instances, and the corresponding performance attributes of the Xception-I architecture, are tabulated in Table 7 and Table 8, respectively.
Fig. 7

Xception-I demonstrates accuracy of DWT featured thoracic X-ray images for four-class instances.

Table 7

Descriptive statistics of DWT featured thoracic X-ray images for four-class instances.

Descriptive Statistics | Mean | Standard Deviation | SE of Mean | Lower 95% CI of Mean | Upper 95% CI of Mean | Min | Median | Max
Haar_Wavelet_Level1 | 94.87 | 1.02 | 0.30 | 94.19 | 95.56 | 92.62 | 95.00 | 96.63
Haar_Wavelet_Level2 | 96.57 | 0.99 | 0.29 | 95.90 | 97.24 | 94.81 | 96.96 | 97.85
Haar_Wavelet_Level1, 2 | 96.09 | 1.40 | 0.42 | 95.14 | 97.03 | 93.46 | 96.54 | 97.74
Table 8

Performance comparison of Xception-I for DWT featured thoracic X-ray images for four-class instances.

Model (Xception-I) | Time Taken Per Epoch | Time Taken Per Step | Training Loss | Training Accuracy | Validation Loss | Validation Accuracy
Haar_Wavelet_Level1 | 120 s | 375 ms | 0.1087 | 0.9604 | 0.1030 | 0.9663
Haar_Wavelet_Level2 | 105 s | 330 ms | 0.0414 | 0.9865 | 0.1216 | 0.9785
Haar_Wavelet_Level1, 2 | 400 s | 625 ms | 0.0454 | 0.9856 | 0.0706 | 0.9774

Accuracy over epoch size

Fig. 8 shows the accuracy of the proposed CNN model for four-class instances over different epoch sizes for the Haar wavelet with one and two decomposition levels. From Table 9 and Table 10, it was observed that the proposed CNN architecture, Xception-I, has a relatively low standard error (SE) of mean and standard deviation for an epoch size of 40. Also, the time per epoch is 109 s, with a validation loss of 0.0576. An epoch size of 40 may therefore be the optimal choice for the proposed Xception-I, as it yields lower validation loss in less time than the other epoch sizes. The proposed wavelet-based deep convolution neural network architectures for the various wavelets and their families were run for 40 epochs with a learning rate of 0.0001, a decay of 1e−6, and a batch size of 10.
Fig. 8

Xception-I demonstrates accuracy concerning different epoch sizes for four-class instances.

Table 9

Descriptive statistics of the accuracy in different epoch sizes for four-class instances.

Epochs / Descriptive Statistics | Mean | Standard Deviation | SE of Mean | Lower 95% CI of Mean | Upper 95% CI of Mean | Min | Median | Max
Accuracy_20epoch | 96.08 | 1.05 | 0.3179 | 95.37 | 96.79 | 94.05 | 96.33 | 97.34
Accuracy_40epoch | 97.44 | 0.65 | 0.1969 | 97.00 | 97.88 | 96.08 | 97.34 | 98.35
Accuracy_60epoch | 96.38 | 1.72 | 0.5190 | 95.23 | 97.54 | 92.53 | 96.84 | 98.61
Accuracy_80epoch | 97.37 | 0.89 | 0.2704 | 96.77 | 97.97 | 95.82 | 97.72 | 98.48
Accuracy_100epoch | 97.64 | 0.63 | 0.1904 | 97.21 | 98.06 | 96.58 | 97.85 | 98.61
Table 10

Performance comparison of Xception-I for four-class instances over different epoch sizes.

Model (Xception-I) | Time Taken Per Epoch | Time Taken Per Step | Training Loss | Training Accuracy | Validation Loss | Validation Accuracy
Accuracy_20epochs | 203 s | 633 ms | 0.0967 | 0.9699 | 0.1240 | 0.9734
Accuracy_40epochs | 109 s | 342 ms | 0.0483 | 0.9856 | 0.0576 | 0.9835
Accuracy_60epochs | 110 s | 341 ms | 0.0808 | 0.9731 | 0.0508 | 0.9861
Accuracy_80epochs | 103 s | 322 ms | 0.0425 | 0.9884 | 0.0694 | 0.9848
Accuracy_100epochs | 195 s | 610 ms | 0.0341 | 0.9912 | 0.0604 | 0.9861

Accuracy over wavelet component

The Discrete Wavelet Transform (DWT) applies a combination of low-pass and high-pass filters, followed by downsampling of the input image, to obtain the approximation and detail wavelet coefficients, respectively. The approximation coefficients form the essential sub-band and contain the low-frequency, high-scale components of the input signal. In contrast, the detail coefficients contain the high-frequency, low-scale components, such as the input image’s noise (Farghaly et al., 2020). The CNN fed with the wavelet approximation component of the thoracic X-ray images was found to outperform the detail components and provided much higher SARS-nCoV detection accuracy. In Table 11, the performance comparison of Xception-I over the wavelet components for four-class instances is listed. One of the key findings of the proposed work is that only the wavelet approximation component is needed for any further study of thoracic X-ray images.
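The two-level decomposition described above can be sketched in a few lines. This is an illustrative stand-alone implementation of the orthonormal 2-D Haar transform, not the authors' code (which also covers Daubechies, Symlet, and other families, typically via a standard DWT library); it assumes an even-sized grayscale image:

```python
import numpy as np

def haar_dwt2(img):
    """One level of the orthonormal 2-D Haar DWT (even-sized input assumed).

    Returns the approximation (low-pass) sub-band and the three
    detail (high-pass) sub-bands, each at half the input resolution.
    """
    a = img[0::2, 0::2].astype(float)  # top-left pixel of each 2x2 block
    b = img[0::2, 1::2].astype(float)  # top-right
    c = img[1::2, 0::2].astype(float)  # bottom-left
    d = img[1::2, 1::2].astype(float)  # bottom-right
    cA = (a + b + c + d) / 2.0  # approximation: low-frequency content
    cH = (a + b - c - d) / 2.0  # row differences (horizontal detail)
    cV = (a - b + c - d) / 2.0  # column differences (vertical detail)
    cD = (a - b - c + d) / 2.0  # diagonal detail
    return cA, (cH, cV, cD)

# Two-level decomposition: apply the transform again to the approximation.
img = np.random.rand(256, 256)      # stand-in for a 256x256 thoracic X-ray
cA1, details1 = haar_dwt2(img)      # level 1: 128x128 sub-bands
cA2, details2 = haar_dwt2(cA1)      # level 2: 64x64 sub-bands
print(cA2.shape)                    # (64, 64)
```

Only the final approximation sub-band (here `cA2`) would be fed to the CNN, matching the observation in Table 11 that the approximation component carries the diagnostically useful low-frequency content while the detail sub-bands mostly capture noise.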
Table 11

Performance comparison of Xception-I for four-class instances over wavelet components.

Wavelet Component | Time Taken Per Epoch | Training Loss | Training Accuracy | Validation Loss | Validation Accuracy
Detail (Diagonal) | 95 s | 1.0456 | 0.5408 | 1.0341 | 0.5535
Detail (Vertical) | 94 s | 0.7551 | 0.6948 | 0.6773 | 0.7514
Detail (Horizontal) | 98 s | 0.6917 | 0.7289 | 0.6459 | 0.7634
Approximation | 188 s | 0.0803 | 0.9740 | 0.0507 | 0.9887

Network-based architecture

First, the performance attributes of the network-based pre-trained deep CNN architectures ResNet50V2, VGG16, and Xception were compared in Table 12. Among the three architectures, Xception performed best. Although VGG16 requires less test time and has fewer parameters, its SARS-nCoV detection accuracy diminished significantly compared to the other architectures. The proposed Xception-I architecture further improved detection accuracy through hyper-parameter tuning, reaching 98.35%. The Xception-I architecture has 23.59 M parameters, with training and validation losses of 0.0483 and 0.0576, respectively. Fig. 9 shows the accuracy and loss plots for the different CNN architectures over 40 epochs. The ResNet50V2 architecture trained the CNN model well enough but failed on the validation part. Training and validation performed well with the other architectures, with minor differences in SARS-nCoV detection accuracy.
Table 12

Performance comparison of network-based pre-trained architectures.

Model | Time Taken Per Epoch | Time Taken Per Step | Training Loss | Training Accuracy | Validation Loss | Validation Accuracy | Params (in M)
ResNet50V2 | 125 s | 393 ms | 0.1701 | 0.9390 | 0.6384 | 0.8392 | 36.22
VGG16 | 84 s | 265 ms | 0.2609 | 0.9095 | 0.1941 | 0.9354 | 16.81
Xception | 106 s | 332 ms | 0.0517 | 0.9852 | 0.0585 | 0.9785 | 36.62
Xception-I | 109 s | 342 ms | 0.0483 | 0.9856 | 0.0576 | 0.9835 | 23.59
Fig. 9

The accuracy and loss plots for different CNN architectures.


Experimental validation

Cross-validation is a statistical method used in this analysis to estimate how well the proposed deep learning models generalize to new or unseen data. The procedure has a single parameter, k, hence the name k-fold Cross-Validation. The proposed wavelet-based deep learning models, for both the two-class (COVID vs. non-COVID) and four-class (COVID-19 vs. Bacterial Pneumonia vs. Viral Pneumonia vs. Normal) cases, were validated using the publicly available Kaggle (Tabik et al., 2020) and Github (Khan et al., 2020) databases. Rodriguez, Perez, and Lozano (2009) presented a sensitivity analysis of k-fold cross-validation in prediction error estimation and suggested that k = 5 or k = 10 folds produce the lowest error, since they are less biased than other choices of k. Initially, k = 1 to 10 was used to assess the results in this study. For all further analysis, 5- and 10-fold CV were applied to the two-class and four-class instances, since they induce minor error and are less biased than other k-fold CV settings. The model was then validated with unseen data sets from the public Kaggle and Github databases using 5 and 10 folds in order to measure its robustness. From the literature, the sensitivity of thoracic X-ray images was higher than that of rRT-PCR, as discussed extensively in the introduction section. The low sensitivity of PCR could stem from the low efficiency of viral nucleic acid detection, including the immature development of nucleic acid detection technology, the variable detection rates of different manufacturers, low patient viral load, or insufficient clinical sampling. In Table 13, the average class-wise precision, recall, and dice similarity coefficient (DSC) of the proposed deep learning model are presented to examine the sensitivity of the proposed wavelet-based technology for the COVID-19 testing procedure.
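The k-fold procedure above can be sketched as a minimal index-splitting helper (a hypothetical utility, not the authors' pipeline): shuffle the sample indices once, partition them into k folds, and hold each fold out in turn.

```python
import numpy as np

def k_fold_indices(n_samples, k, seed=0):
    """Shuffle sample indices and yield (train, test) index arrays for k folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)          # k nearly equal partitions
    for i in range(k):
        test = folds[i]                     # held-out fold
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

# e.g. 5-fold CV over the 2344 two-class thoracic X-ray images
for train, test in k_fold_indices(2344, k=5):
    pass  # fit the CNN on `train`, evaluate on `test`, then average the fold scores
```

Each image appears in exactly one test fold, so the averaged fold scores estimate performance on unseen data.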
Table 13

Average class-wise precision, recall, DSC for four-class instances.

Class | Precision | Recall | DSC
Pneumonia Bacterial (PNA Bact.) | 0.98 | 0.93 | 0.97
COVID-19 | 0.99 | 0.98 | 0.99
Normal | 0.99 | 0.97 | 0.98
Pneumonia Viral (PNA Viral) | 0.96 | 1.00 | 0.98
To investigate the heterogeneity and robustness of the deep learning models, the proposed Xception-I was tested on both two-class and four-class instances from the Kaggle and Github datasets. The 5-fold and 10-fold CV strategies are examined in Sections 6.1 and 6.2.
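For reference, the DSC reported in Table 13 is equivalent to the F1 score, i.e., the harmonic mean of precision and recall; a one-line check against the PNA Viral row (which matches up to rounding of the reported values):

```python
def dice_similarity(precision, recall):
    """DSC, equivalent to the F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# PNA Viral row of Table 13: precision 0.96, recall 1.00
print(round(dice_similarity(0.96, 1.00), 2))  # 0.98
```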

Two-class instances

For the cross-validation techniques, 2344 thoracic X-ray images belonging to the two-class instance (COVID vs. Non-COVID) were fed to the model, which misclassified only 46 images on 5-fold CV and 79 on 10-fold CV, with minimal test losses of 0.0926 and 0.0465, respectively. Fig. 10 shows the confusion matrix for two-class instances with 5- and 10-fold cross-validation.
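The misclassification counts above translate directly into test accuracy; a quick arithmetic check (assuming the same 2344 images are scored under each scheme):

```python
# Error-rate arithmetic for the two-class CV runs reported above
n_images = 2344
for scheme, wrong in [("5-fold", 46), ("10-fold", 79)]:
    acc = 100 * (n_images - wrong) / n_images
    print(scheme, round(acc, 2))  # 5-fold 98.04, 10-fold 96.63
```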
Fig. 10

Confusion matrix for two-class instances with 5 and 10 fold CV.

Four-class instances

The proposed wavelet-based deep learning Xception-I model performed remarkably well on unseen data from the Kaggle and Github databases. For cross-validation, the first 400 thoracic X-ray images were fed to the model; 393 images were correctly classified and seven were misclassified as normal on a 5-fold CV. Similarly, 10-fold CV applied to the same data produced only two images misclassified as viral pneumonia. The proposed method was also validated on a 1004-image dataset, with only 20 misclassified images. The normalized confusion matrix of the validation predictions is shown in Fig. 11.
Fig. 11

Confusion matrix for four-class instances with 5 and 10 fold CV.

The Xception-I model predicted the COVID-19 and PNA Viral groups with slightly higher precision than the other groups, as seen from the confusion matrix. Since the class instances are imbalanced, one hypothesis is to generate more images through data augmentation for the classes with less data, so that all classes contain approximately the same number of images. The evaluation assessment of the proposed CNN architecture Xception-I on each fold is listed in Table 14. The estimated error from each k-fold cross-validation run is shown in Fig. 12. Compared to the other folds, 10-fold CV has the lowest error rate and the highest COVID-19 identification accuracy. In addition, the performance of the proposed convolution neural network (CNN) architecture Xception-I has been compared with other methods from the literature.
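The augmentation hypothesis could be prototyped along these lines. This is a hypothetical sketch (small pixel shifts plus intensity jitter) rather than a method used in the paper; the helper name and parameters are illustrative:

```python
import numpy as np

def augment(img, rng):
    """Hypothetical augmentation: small random shift plus intensity jitter."""
    dy, dx = rng.integers(-3, 4, size=2)          # shift by up to 3 pixels
    img = np.roll(img, (dy, dx), axis=(0, 1))
    return np.clip(img * rng.uniform(0.9, 1.1), 0.0, 1.0)

rng = np.random.default_rng(0)
minority = [rng.random((64, 64)) for _ in range(10)]  # under-represented class
extra = [augment(img, rng) for img in minority]       # doubles the class size
```

In practice, medically safe transforms (small rotations, crops, intensity changes) would be preferred over mirroring, since flipping a chest X-ray reverses anatomical orientation.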
Table 14

Evaluation assessment of Xception-I on each fold.

Folds | Precision | Recall | DSC | Accuracy | Support
fold1 | 0.98 | 0.98 | 0.98 | 0.9800 | 400 X-ray
fold2 | 0.97 | 0.96 | 0.98 | 0.9700 | 400 X-ray
fold3 | 0.98 | 0.99 | 0.98 | 0.9850 | 400 X-ray
fold4 | 0.97 | 0.97 | 0.97 | 0.9725 | 400 X-ray
fold5 | 0.99 | 0.99 | 0.99 | 0.9825 | 400 X-ray
fold10 | 0.98 | 1.00 | 1.00 | 0.9950 | 400 X-ray
Weighted Average | 97.83 | 98.16 | 98.33 | 98.080 |
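Since every fold in Table 14 has equal support (400 X-rays), the weighted-average row reduces to a plain mean over the six reported folds; recomputing it reproduces the reported values to within rounding:

```python
import numpy as np

# Per-fold metrics from Table 14 (folds 1-5 and fold 10, equal support of 400 each)
precision = [0.98, 0.97, 0.98, 0.97, 0.99, 0.98]
recall    = [0.98, 0.96, 0.99, 0.97, 0.99, 1.00]
dsc       = [0.98, 0.98, 0.98, 0.97, 0.99, 1.00]
accuracy  = [0.9800, 0.9700, 0.9850, 0.9725, 0.9825, 0.9950]

# Equal support per fold, so the weighted average is the plain mean (reported in %)
for name, vals in [("precision", precision), ("recall", recall),
                   ("DSC", dsc), ("accuracy", accuracy)]:
    print(name, round(100 * np.mean(vals), 2))
```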
Fig. 12

Error estimation with respect to the k-Fold CV.

Numerous methods have been developed to identify the novel 2019 coronavirus disease, but only a few achieve more than 95% identification accuracy, as listed in Table 15. Moreover, most approaches have focused on two-class (COVID vs. non-COVID) or three-class (COVID vs. non-COVID vs. Pneumonia) instances with smaller thoracic X-ray image datasets; only a few models address four-class instances (COVID-19 vs. Bacterial Pneumonia vs. Viral Pneumonia vs. Normal). Thus, the proposed study contributes in a novel way to improved identification accuracy on a large image dataset.
Table 15

Performance comparison of CNN architecture Xception-I from literature.

Authors | Image modality | Method used | COVID-19 cases | Acc (%)
Xu et al. | CT images | ResNet-18 + location attention | 219 (+) | 86.70
Zhang et al. | X-ray images | ResNet-18 + classification head + anomaly detection head | 100 (+) | 96.00
Wang et al. | X-ray images | COVID-Net | 53 (+) | 92.40
Sethy et al. | X-ray images | ResNet50 + SVM | 25 (+) | 95.38
Wang et al. | CT images | DenseNet121 | 924 (+) | 80.12
Narin et al. | X-ray images | Deep CNN ResNet-50 | 50 (+) | 98.00
Gozes et al. | CT images | ResNet-50 | 50 (+) | 94.00
Apostolopoulos et al. | X-ray images | VGG-19 | 244 (+) | 93.48
Hemdan et al. | X-ray images | COVIDX-Net | 25 (+) | 90.00
Abbas et al. | X-ray images | DeTrac | 105 (+) | 95.12
Hall et al. | X-ray images | ResNet50 + VGG16 | 135 (+) | 94.40
Li et al. | CT images | COVNet | 400 (+) | 90.00
Zheng et al. | CT images | DeCoVNet | 313 (+) | 90.00
Song et al. | CT images | DRE-Net | 777 (+) | 86.00
Ozturk et al. | X-ray images | DarkCovidNet | 125 (+) | 87.02
Khan et al. | X-ray images | CoroNet | 284 (+) | 89.60
Wang et al. | CT images | M-Inception | 195 (+) | 82.90
Hassantabar et al. | X-ray images | CNN | 100 (+) | 93.20
Shibly et al. | X-ray images | VGG-16 based Fast R-CNN | 183 (+) | 97.36
Proposed work | X-ray images | Wavelet-based Deep CNN | 1552 (+) | 98.87
A few thoracic X-ray images were not correctly aligned, as shown in Fig. 13, causing the CNN to learn different features than typical datasets and lowering classification accuracy. In this study, these misaligned images were manually excluded before being fed to the CNN model. An asymptomatic person with COVID-19 may or may not have infected lungs and may therefore be misclassified as normal or as bacterial or viral pneumonia. This investigation examined only the progression of illness from bacterial or viral pneumonia to COVID-19. The severity and progression of asymptomatic COVID-19 can be studied provided suitable datasets become available. Also, a person with COVID-19 symptoms and pneumonia (mild, moderate, or severe) may have distinct X-ray scans, which were not considered in this study.
Fig. 13

Misaligned thoracic X-ray images from collected dataset.


Managerial implications

The primary purpose of sensitivity analysis is to determine managerial implications and insights (Gharaei, Hoseini Shekarabi, Karimi, Pourjavad & Amjadian, 2019). According to optimality criteria such as the number of iterations, optimality error, infeasibility, and complementarity, the suggested technique performed exceptionally well. Extensive reviews exist of alternative sensitivity-analysis solution approaches that use a similar optimality criterion (Awasthi and Omrani, 2019, Gharaei, Hoseini Shekarabi et al., 2020, Gharaei, Karimi et al., 2020, Gharaei, Karimi et al., 2019, Giri and Masanta, 2020, Rabbani et al., 2020, Tsao, 2015). The proposed algorithm selects the optimal error based on the number of iterations in the proposed architectures, the optimizer used for loss calculation, the number of epochs, the learning rate, and the batch size. Furthermore, the proposed technique applies an early-stopping check to interrupt the learning process when generalization progress decreases. Complementarity gives match quality through differences: capabilities are complementary if they differ in a way that can be combined to create greater value (Mitsuhashi & Greve, 2009). Complementarity simulates negative externalities, in which the relative payoff of one activity drops as the number of agents playing it grows (Bramoullé, 2001). The proposed algorithm investigated complementarity and infeasibility by changing the number of epoch runs and the k of the k-fold CV methods and analyzing the change in COVID-19 detection accuracy (ref. Section 6). Receiver Operating Characteristic (ROC) curve analysis was used to study the diagnostic performance of RT-PCR; however, it offers very low sensitivity, 60%–70% (Hasab, 2020).
In addition, all of the essential parameters were subjected to sensitivity analysis in order to determine the impact of parameter modifications. The ROC curve for the proposed method under the considered optimality criterion parameters is shown in Fig. 14. Based on the sensitivity analysis, we provide various managerial implications and insights to assist radiologists in their decision-making for any further clinical analysis.
Fig. 14

ROC curve for the proposed method under the considered optimality criterion parameters.


Conclusion

In this investigation, a wavelet- and artificial intelligence-enabled testing protocol was developed for patients with SARS-nCoV through deep learning on DWT-featured thoracic X-ray images. The suggested technique, which incorporates DWT into the CNN model with thoracic X-ray images, outperformed CT scans and offered much greater SARS-nCoV detection accuracy. The proposed CNN architecture was also tested on some CT-scan images and produced identical, compelling results. Most studies either worked on a small dataset or considered two-class (COVID-19 vs. Normal) or three-class instances (COVID-19 vs. Influenza-A-viral/Pneumonia vs. Normal) and achieved less accurate identification of SARS-nCoV than the research work proposed here. Thus, the proposed work contributes in a novel way to detecting SARS-nCoV on a broad set of DWT-featured X-ray images for a four-class instance (COVID-19 vs. Pneumonia Viral vs. Pneumonia Bacterial vs. Normal). Various performance attributes of the CNN architecture, such as time taken per epoch, time taken per step, training loss, training accuracy, validation loss, and validation accuracy, were assessed. In conclusion, the two-level Symlet 7 approximation component had the highest test accuracy (98.87%), followed by Biorthogonal 2.6 (98.73%). Although the Symlet 7 and Biorthogonal 2.6 test accuracies are outstanding, the Haar and Daubechies wavelets scored significantly better in k-fold cross-validation accuracy on unseen data using a two-level DWT. For the two-class case (COVID vs. Non-COVID), 2344 thoracic X-ray images were fed into the model. With minimal test losses of 0.0926 and 0.0465, the model misclassified 46 images on a 5-fold CV and 79 images on a 10-fold CV. In the four-class instance, 400 thoracic X-ray images were fed into the model on a 5-fold CV, with 393 correctly classified and seven misclassified as normal.
Only two images, misclassified as viral pneumonia, were detected using 10-fold CV on the same data. Besides, the proposed method was evaluated on a larger 1004-image dataset, with just 20 misclassified images reported. A few limitations of the collected datasets might be the reason for these misclassifications. In particular, with the proposed architectural models for four-class instances (COVID-19 vs. Bacterial Pneumonia vs. Viral Pneumonia vs. Normal), the precision, recall rate, and dice similarity coefficient (DSC) for COVID-19 are 98%, 98%, and 99%, respectively.

Source code and dataset

The data and code of the proposed wavelet technology will be made openly accessible upon publication. The proposed study used some datasets of recent COVID-19 patients, collected from radiologists, to assess the efficiency of the presented wavelet technology and to examine its real-time viability for quick and efficient testing in the present pandemic situation.

CRediT authorship contribution statement

Amar Kumar Verma: Conceptualization, Data curation, Writing – original draft, Investigation, Software. Inturi Vamsi: Methodology. Prerna Saurabh: Data curation, Validation, Investigation. Radhika Sudha: Supervision, Reviewing and Editing, Data curation. Sabareesh G.R.: Supervision, Reviewing and editing. Rajkumar S.: Supervision, Introduction and literature survey.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Table 16

Xception architecture.

Layer (Type) | Output Shape | Param #
Xception (Model) | (None, 5, 5, 2048) | 20861480
flatten_1 (Flatten) | (None, 51200) | 0
dropout_1 (Dropout) | (None, 51200) | 0
dense_2 (Dense) | (None, 256) | 15360300
dense_3 (Dense) | (None, 4) | 1204

Total params: 36,222,984; Trainable params: 36,168,456; Non-trainable params: 54,528
Table 17

VGG-16 architecture.

Layer (Type) | Output Shape | Param #
vgg16 (Model) | (None, 4, 4, 512) | 14714688
flatten_1 (Flatten) | (None, 51200) | 0
dropout_1 (Dropout) | (None, 51200) | 0
dense_2 (Dense) | (None, 256) | 15360300
dense_3 (Dense) | (None, 4) | 1204

Total params: 16,813,124; Trainable params: 16,813,124; Non-trainable params: 0
Table 18

Inception_ResNet50V2 architecture.

Layer (Type) | Output Shape | Param #
resnet50v2 (Model) | (None, 5, 5, 2048) | 23564800
flatten_1 (Flatten) | (None, 51200) | 0
dropout_1 (Dropout) | (None, 51200) | 0
dense_2 (Dense) | (None, 256) | 15360300
dense_3 (Dense) | (None, 4) | 1204

Total params: 36,673,284; Trainable params: 36,627,844; Non-trainable params: 45,440
Table 19

Wavelet-featured deep CNN Xception-I architecture.

Layer (Type) | Output Shape | Param #
Xception (Model) | (None, 5, 5, 2048) | 20861480
flatten_1 (Flatten) | (None, 51200) | 0
dropout_1 (Dropout) | (None, 51200) | 0
dense (Dense) | (None, 50) | 2560050
dense_1 (Dense) | (None, 100) | 5100
dense_2 (Dense) | (None, 150) | 15150
dense_3 (Dense) | (None, 200) | 30200
dense_4 (Dense) | (None, 250) | 50250
dense_5 (Dense) | (None, 300) | 75300
dense_6 (Dense) | (None, 4) | 1204

Total params: 23,598,734; Trainable params: 23,544,206; Non-trainable params: 54,528
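The per-layer counts in Table 19 follow from the usual dense-layer arithmetic, params = inputs × units + units; a quick check (the 20,861,480 backbone count is taken directly from the table):

```python
def dense_params(n_in, n_out):
    """Weights plus biases for a fully connected layer."""
    return n_in * n_out + n_out

# Xception-I head from Table 19: flattened 5*5*2048 = 51200 inputs,
# then dense layers of widths 50, 100, 150, 200, 250, 300, and 4 outputs.
widths = [5 * 5 * 2048, 50, 100, 150, 200, 250, 300, 4]
counts = [dense_params(a, b) for a, b in zip(widths, widths[1:])]
print(counts)                     # per-layer parameter counts
print(sum(counts) + 20861480)     # head total plus the Xception backbone
```

The sum matches the 23,598,734 total parameters reported for Xception-I.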
  3 in total

1.  Detection and Prevention of Virus Infection.

Authors:  Ying Wang; Bairong Shen
Journal:  Adv Exp Med Biol       Date:  2022       Impact factor: 2.622

2.  A wavelet-based deep learning pipeline for efficient COVID-19 diagnosis via CT slices.

Authors:  Omneya Attallah; Ahmed Samir
Journal:  Appl Soft Comput       Date:  2022-07-29       Impact factor: 8.263

3.  A Deep Learning and Handcrafted Based Computationally Intelligent Technique for Effective COVID-19 Detection from X-ray/CT-scan Imaging.

Authors:  Mohammed Habib; Muhammad Ramzan; Sajid Ali Khan
Journal:  J Grid Comput       Date:  2022-07-18       Impact factor: 4.674

