Literature DB >> 34926174

COVID-19 prediction through X-ray images using transfer learning-based hybrid deep learning approach.

Mohit Kumar1, Dhairyata Shakya2, Vinod Kurup3, Wanich Suksatan4.   

Abstract

Over the past few months, the campaign against COVID-19 has developed into one of the most critical public health efforts in the world. It is essential to identify cases of COVID-19 accurately and quickly, with proper medical reasoning and treatment, to help prevent this pandemic from taking a wrong turn. While Reverse-Transcription Polymerase Chain Reaction (RT-PCR) has been useful in detecting the coronavirus, chest X-ray techniques have proven more successful at detecting the effects of the virus. With the growing number of COVID-19 patients and X-rays performed, it is now possible to classify X-ray reports with transfer learning. This paper presents a novel approach, the Hybrid Deep Convolutional Neural Network (HDCNN), which integrates Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) architectures for detecting COVID-19 from chest X-rays. A transfer-learning visualization technique, Gradient-weighted Class Activation Mapping (Grad-CAM), is used with HDCNN to display the image regions responsible for its decisions. In this study, HDCNN is compared with other CNNs such as Inception-v3, ShuffleNet, SqueezeNet, VGG-19 and DenseNet. HDCNN achieved an accuracy of 98.20%, precision of 97.31%, recall of 97.1% and an F1 score of 0.97. Compared with other current deep learning models, HDCNN achieved better results, and it could be used for diagnostic purposes after proper approvals.
Copyright © 2022 Elsevier Ltd. All rights reserved. Selection and peer-review under responsibility of the scientific committee of the International Conference on Advances in Materials Science.

Entities:  

Keywords:  Convolutional Neural Network; CovidGAN; DarkCovidNet; Grad-CAMs; Recurrent Neural Network

Year:  2021        PMID: 34926174      PMCID: PMC8666290          DOI: 10.1016/j.matpr.2021.12.123

Source DB:  PubMed          Journal:  Mater Today Proc        ISSN: 2214-7853


Introduction

The coronavirus (SARS-CoV-2) outbreak was first identified in Wuhan, China on 31 December 2019 [1]. More than 118,000 cases and 4,291 deaths had already been reported by 11 March 2020, when the WHO declared it a pandemic. Since exposure to respiratory fluids containing the virus is the principal mode of transmission, it is important to isolate patients as early as possible to reduce the danger of further transmission. Reverse transcription-polymerase chain reaction (RT-PCR) [2] and gene sequencing of respiratory or blood specimens are the major COVID-19 screening procedures. However, reports show that for throat swab samples, the total positive RT-PCR rate is between 30% and 60%. This leaves undiagnosed patients who can infect a large healthy population. Chest X-ray imaging is a standard tool for diagnosing pneumonia, and it is quick and easy to acquire (as is CT). The diagnostic sensitivities of chest CT scans and X-ray images show correlations with the visual indices of COVID-19: chest imaging reports demonstrate multilobar involvement and peripheral airspace opacities [3]. Ground-glass (57%) and mixed attenuation (29%) are the most frequently identified opacities. Early in COVID-19, this pattern appears in areas bordering the pulmonary vessels and is difficult to appreciate visually [4]. COVID-19 also produces asymmetrical or diffuse airspace opacities. Only expert radiologists can interpret these subtle abnormalities. Given the high number of suspected patients and the small number of trained radiologists, automated ways to identify these modest abnormalities could assist early diagnosis. Artificial intelligence is one of the most powerful tools for addressing these issues. The unavailability of public images of COVID-19 patients has so far precluded detailed study of automated X-ray (or chest CT) COVID-19 identification solutions [5].
However, a dataset of X-ray images of people with COVID-19 has recently been compiled to help AI researchers learn how to analyze COVID-19 correctly using image data. The images come from papers reporting X-ray findings, CT findings and COVID-19 outcomes. Our board-certified radiologist re-labelled only the COVID-19 images, with a clear notice for everyone to see that the COVID-19 designation was made by the radiologist. Images from the CheXpert collection are also included alongside the COVID-19 photos. A total of roughly 5000 chest X-ray images are used in this combined dataset, divided into 2000 training samples and 3000 test samples (COVID-Xray-5k) [6]. COVID-19 images were recognized using a deep learning architecture. Instead of first extracting features and then using those features to detect COVID-19 disease, we employ a deep learning approach that looks for COVID-19 directly in raw images and does not require hand-crafted feature extraction and recognition. Convolutional neural networks (CNNs) have recently excelled at computer vision and image analysis tasks, in most cases outperforming classical AI approaches [7]. Classification, segmentation, face recognition and image enhancement are just a few of the tasks for which they are used [8]. We train convolutional networks on the COVID-Xray dataset, with promising results and a performance analysis for detecting COVID-19 [9]. Because only a small number of X-ray images have been published for the COVID-19 class so far, the models cannot simply be trained from scratch. Two approaches to the COVID-19 image problem have been developed in this work. In the first approach, a CNN and RNN model with max pooling and dropout layers is applied to the chest X-ray images [10]. In the second stage, the output of the hybrid CNN-RNN approach is fine-tuned using a transfer learning approach, i.e.
Grad-CAM, for recognizing COVID-19 in chest X-ray images [11]. Both methodologies described above have enabled these networks to learn from a dataset of 6000 images, resulting in a positive evaluation on that 6000-image dataset. To calculate confidence intervals for the performance metrics, we are limited to only 50 samples for the COVID-19 class. The main contributions of this paper are as follows. For COVID-19 detection, we have produced a dataset of 6000 images with different labels; this dataset can provide the research community with a benchmark. The COVID-19 class images are labelled by a board-certified radiologist and are clearly marked as being for testing purposes only. We trained four promising deep learning models on this dataset and measured their performance on a test set of 1800 images. The metrics used to evaluate the models are accuracy, precision, recall and F1 score. We have made the dataset, the trained models, and the implementation publicly available. Given the quantity of labelled data, it is essential to note that the results of this work are still preliminary and that further experiments on a larger set of labelled COVID-19 X-ray images are necessary. Nevertheless, the results are highly encouraging, and this can become valuable future work after passing through the medical verification process. The paper is organized as follows: the first section covers the introduction, the second describes the literature survey, and the third discusses the materials and methods, followed by the conclusion and future work. Finally, the last section lists the references used in this paper.

Literature survey

The work reported in [12] implemented pre-trained CNNs for the detection of COVID-19 from X-ray images, such as MobileNet-v2 [13], ResNet-v2 [14], VGG-19 [15] and Inception [16]. The pre-trained CNNs used two datasets consisting of COVID-19, viral pneumonia, bacterial pneumonia and healthy images, in 2-class and 3-class settings. CoroNet [17] is a similar CNN that detects the virus on chest X-ray images. The pre-trained Xception CNN was used as the base model [18]: CoroNet extends Xception with a dropout layer and two fully connected layers. As a result, CoroNet has 33,969,964 parameters, of which 33,915,436 are trainable; the remaining 54,528 are not. The network was employed to divide the data into three groups (COVID-19, pneumonia and normal) and into four groups (COVID-19, bacterial pneumonia, viral pneumonia and normal). Nair et al. [19] use the Darknet-19 CNN model, which serves as a platform for real-time object detection; the framework's architecture was developed to support objects so that they may be recognized in real time. Based on the Darknet architecture, that work constructed a DarkCovidNet model with fewer layers and filters. The DarkNet design consists of five pooling operations and at most 19 layers. To find COVID-19 on chest X-rays, DarkCovidNet [19] is an additional CNN model developed on DarkNet [20]. Fewer layers and filters (gradually increased) are contained in DarkCovidNet. Three different classes were examined (COVID-19, no-findings and pneumonia). Gomathi et al.
[21] discussed a diagnostic study of COVID-19 with updated, cost-effective and timely classification of the disease. An automated machine learning (AutoML) system was developed, which offers resources and platforms to help non-ML experts apply machine learning. It attained an accuracy of 95 percent in classifying COVID-19 disease. CovidGAN [22] was proposed to detect COVID-19 as an auxiliary classifier based on a generative adversarial network (GAN). The pre-trained VGG-16 network was modified by appending four specially created layers: a global average pooling layer, followed by a 64-unit dense layer and a dropout layer with probability 0.5 [23]. Additionally, the network used the GAN to generate synthetic chest X-ray images to improve classification performance. Bayes-SqueezeNet identifies the presence of COVID-19 using chest X-rays [24]. The proposed network augments the raw dataset and trains the model with Bayesian optimization, which runs in the background. Bayes-SqueezeNet was used to categorize X-ray images into normal, pneumonia and COVID-19 classes. Finally, the network claims to be able to correct the imbalanced data that result from using public databases.

Materials and methods

In this research we have used a combined deep learning approach in which we integrate a Recurrent Neural Network and a Convolutional Neural Network with transfer learning [25]. In the following stages, we first discuss the dataset and then cover the implementation of the work.

Dataset

The dataset of COVID-19 X-ray images was collected from various sources, such as GitHub, Kaggle and Mendeley, to obtain a large dataset [25]. In the experiment, a total of 6000 samples were used, 2000 for each case. The whole dataset was split 70%/30% into training and test sets: 1400 COVID-19 samples, 1400 pneumonia samples and 1400 normal cases were used for training, and the remaining 600 COVID-19, 600 pneumonia and 600 normal samples were used for testing. Fig. 1 shows the block diagram of the proposed HDCNN.
Fig. 1

Flowchart of the proposed work.

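The class-balanced 70/30 split described above can be sketched as follows. This is an illustrative sketch, not the authors' code; the class names and the seed are invented for the example.

```python
import random

# Mirrors the paper's setup: 2000 images per class (COVID-19, pneumonia,
# normal), split 70% training / 30% testing per class.
CLASSES = ["covid19", "pneumonia", "normal"]
SAMPLES_PER_CLASS = 2000
TRAIN_FRACTION = 0.7

def split_dataset(seed=42):
    """Return per-class train/test index lists (hypothetical helper)."""
    rng = random.Random(seed)
    train, test = {}, {}
    for cls in CLASSES:
        indices = list(range(SAMPLES_PER_CLASS))
        rng.shuffle(indices)                            # shuffle before splitting
        cut = int(SAMPLES_PER_CLASS * TRAIN_FRACTION)   # 1400 per class
        train[cls], test[cls] = indices[:cut], indices[cut:]
    return train, test

train, test = split_dataset()
# 1400 training and 600 test indices per class -> 4200 / 1800 overall,
# matching the 1800-image test set mentioned in the introduction.
```

Splitting within each class, rather than over the pooled 6000 images, keeps both sets balanced across the three diagnoses.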

Methodology

The approach proposed in this investigation is the integration of CNN and RNN, named HDCNN (Hybrid Convolutional Neural Network) [26]. The CNN is used to extract features and sample them into sequences. These sequential data are fed into the Recurrent Neural Network, and finally we apply a transfer learning approach in the form of Gradient-weighted Class Activation Mapping (Grad-CAM); this function is used to explain the predicted class of a given image. In this study, deep learning architectures were used to decode the X-ray image. The fundamental ideas of the CNN and RNN that are integrated to build the model are given below. A. Convolutional Neural Network - The dataset of X-ray images is used as input to the Convolutional Neural Network. A CNN implementation has three major layer types: convolution, pooling and dense layers. In the convolution layer, the input signal is convolved with a kernel, and the resulting feature map carries the outcome of the operation. To facilitate quicker calculation, a pooling layer is placed between two convolution layers. In the fully connected layer, every neuron is linked to every neuron of the preceding pooling layer, and the input signal is classified into the various classes [27]. Fig. 2 depicts the block diagram of the convolutional neural network.
Fig. 2

Block diagram of Convolutional Neural Network.
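The convolution and pooling operations described in part A can be illustrated with a minimal NumPy sketch. This is a toy example, not the authors' implementation; the patch and kernel are invented, and `conv2d` computes cross-correlation, which is what deep learning frameworks call "convolution".

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation, as in a CNN convolution layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling; halves spatial resolution for size=2."""
    h, w = feature_map.shape
    h2, w2 = h // size, w // size
    return (feature_map[:h2 * size, :w2 * size]
            .reshape(h2, size, w2, size).max(axis=(1, 3)))

# Toy 6x6 "X-ray patch" through one conv -> ReLU -> pool stage
patch = np.arange(36, dtype=float).reshape(6, 6)
grad_kernel = np.array([[-1.0, 1.0], [-1.0, 1.0]])   # horizontal-gradient filter
features = max_pool(np.maximum(conv2d(patch, grad_kernel), 0.0))
# patch increases by 1 per column, so every conv response is 2.0
```

In a real CNN the kernels are learned rather than hand-chosen, and many such feature maps are stacked per layer.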

B. Recurrent Neural Network - Compared with conventional feed-forward network designs, the modelling capacity of an RNN is inherently powerful because neurons pass feedback messages to one another within the same hidden layer (i.e., the input of a hidden layer takes into account the output from previous steps), which gives the RNN a memory of its historical state. Our model contains a Gated Recurrent Unit (GRU) to record time-series input signals from past time steps. The architecture of the RNN is given in Fig. 3.
Fig. 3

Architecture of Recurrent Neural Network.
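A single GRU step can be written out in NumPy to show the gating the text refers to. This is an illustrative sketch with random weights, not the trained model; the 64-dimensional step input echoes the 64 units per step used in the HDCNN's RNN stage, while the hidden size and step count are invented.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One Gated Recurrent Unit step on input x with previous state h."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(x @ Wz + h @ Uz)            # update gate: how much to rewrite
    r = sigmoid(x @ Wr + h @ Ur)            # reset gate: how much history to use
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)  # candidate state
    return (1.0 - z) * h + z * h_tilde      # blend old state and candidate

rng = np.random.default_rng(0)
dim_in, dim_h = 64, 32                      # 64-dim features per step (as in the paper)
params = tuple(rng.normal(scale=0.1, size=s)
               for s in [(dim_in, dim_h), (dim_h, dim_h)] * 3)

h = np.zeros(dim_h)
for step in range(20):                      # feed 20 feature vectors through the GRU
    h = gru_step(rng.normal(size=dim_in), h, params)
```

Because the new state is a gated blend of the old state and a tanh candidate, every hidden value stays strictly inside (-1, 1), which keeps the recurrence numerically stable over long sequences.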

C. HDCNN - In the HDCNN (Hybrid Convolutional Neural Network), the X-ray image input is handled by two convolution layers, two pooling layers, two fully connected layers and a SoftMax function. A long convolution occurs in the first layer, and spatial features are filtered by the kernel in the second layer. The RNN was designed to classify the input signals, with the raw data divided into 700 steps of 64 units each, based on time (i.e., step length). The outputs of the RNN module over the last 20 steps are fed into the fully connected layers for classification. D. Gradient-weighted Class Activation Mapping - Class-specific gradient information flowing into the final convolution layer of a CNN is used to create a coarse localization map of the important regions in an image with Gradient-weighted Class Activation Mapping (Grad-CAM). Grad-CAM is a strict generalization of Class Activation Mapping (CAM): unlike CAM, Grad-CAM requires no re-training and applies to all CNN-based architectures. We also show how Grad-CAM can be combined with existing pixel-space visualizations to generate a high-resolution class-discriminative view (Guided Grad-CAM). To better understand image classification, image captioning and visual question answering (VQA) models, we generate visual explanations with Grad-CAM and Guided Grad-CAM. For image classification models, our visualization gives insight into their failure modes, shows that seemingly unreasonable predictions have reasonable explanations, and outperforms pixel-space gradient visualizations (Guided Backpropagation and Deconvolution) at weakly-supervised localization on ILSVRC-15. Our visualizations for image captioning and VQA show that common CNN + LSTM models can often identify discriminative image regions even though they have not been trained on grounded image-text pairs.
Finally, we design and carry out human studies to measure whether Guided Grad-CAM explanations help users trust deep network predictions. Interestingly, Guided Grad-CAM helps untrained users distinguish a "stronger" deep network from a "weaker" one, even when the two networks make identical predictions.
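The core Grad-CAM computation described in part D — channel weights from spatially averaged gradients, then a ReLU of the weighted sum of activation maps — can be sketched in NumPy. The activations and gradients here are random placeholders; in practice they come from the last convolution layer and from backpropagating the class score.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from last-conv-layer activations and their gradients.

    activations: (C, H, W) feature maps A_k
    gradients:   (C, H, W) d(class score)/dA_k
    """
    weights = gradients.mean(axis=(1, 2))             # alpha_k: one weight per channel
    cam = np.tensordot(weights, activations, axes=1)  # sum_k alpha_k * A_k -> (H, W)
    cam = np.maximum(cam, 0.0)                        # ReLU keeps positive evidence only
    if cam.max() > 0:
        cam /= cam.max()                              # normalize to [0, 1] for display
    return cam

rng = np.random.default_rng(1)
acts = rng.random((8, 7, 7))        # toy activations (8 channels, 7x7 maps)
grads = rng.normal(size=(8, 7, 7))  # toy gradients of the class score
heatmap = grad_cam(acts, grads)     # coarse 7x7 localization map
```

The 7x7 map is then upsampled to the input resolution and overlaid on the X-ray to show which regions drove the prediction.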

Experiment and results

The hybrid HDCNN approach was implemented using the Anaconda framework, which provides a Python 3 programming environment. The metrics used to evaluate the performance of the proposed model are accuracy, precision, recall and F1 score. Table 1 shows the comparison of existing CNN models with HDCNN, and it is observed that HDCNN outperforms the other existing models.
Table 1

Comparison of other CNN models with HDCNN.

Model               Accuracy   Precision   Recall   F1 score
Inception-v3 [28]   93.62      96.20       90.71    0.94
ShuffleNet          95.97      95.44       96.57    0.96
SqueezeNet          87.52      86.84       88.29    0.88
VGG-19              90.16      87.34       93.33    0.90
DenseNet            96.20      95.78       96.67    0.96
HDCNN               98.20      97.31       97.1     0.97
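As a sanity check on Table 1, the F1 scores follow from the listed precision and recall by the standard harmonic-mean definition; for HDCNN's 97.31% precision and 97.1% recall, F1 indeed rounds to the reported 0.97. This snippet is illustrative, not the authors' evaluation code.

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

hdcnn_f1 = f1_score(0.9731, 0.971)   # ~0.972, i.e. 0.97 as listed in Table 1
```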

Conclusion and future work

In the current scenario, COVID-19 analysis is carried out through various technological advancements, one of which is Artificial Intelligence. This study proposed the hybrid deep learning model HDCNN, based on a convolutional neural network and a recurrent neural network, with the transfer learning approach Grad-CAM. The proposed model achieved better performance than other existing models. In a future study, we will implement COVID-19 analysis through the latest networks, such as the capsule network. The current work analyzed chest X-ray images, but coronavirus analysis can also be performed on CT scan images, MRI images, etc., in future work.

CRediT authorship contribution statement

Mohit Kumar: Investigation, Writing – original draft. Dhairyata Shakya: Conceptualization, Writing – review & editing, Supervision. Vinod Kurup: Formal analysis, Data curation. Wanich Suksatan: Conceptualization.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Related articles: 14 in total

1.  An automatic approach based on CNN architecture to detect Covid-19 disease from chest X-ray images.

Authors:  Swati Hira; Anita Bai; Sanchit Hira
Journal:  Appl Intell (Dordr)       Date:  2020-11-27       Impact factor: 5.086

2.  Truncated inception net: COVID-19 outbreak screening using chest X-rays.

Authors:  Dipayan Das; K C Santosh; Umapada Pal
Journal:  Phys Eng Sci Med       Date:  2020-06-25

3.  Automated detection of COVID-19 cases using deep neural networks with X-ray images.

Authors:  Tulin Ozturk; Muhammed Talo; Eylul Azra Yildirim; Ulas Baran Baloglu; Ozal Yildirim; U Rajendra Acharya
Journal:  Comput Biol Med       Date:  2020-04-28       Impact factor: 4.589

4.  Application of deep learning technique to manage COVID-19 in routine clinical practice using CT images: Results of 10 convolutional neural networks.

Authors:  Ali Abbasian Ardakani; Alireza Rajabzadeh Kanafi; U Rajendra Acharya; Nazanin Khadem; Afshin Mohammadi
Journal:  Comput Biol Med       Date:  2020-04-30       Impact factor: 4.589

5.  COVIDiagnosis-Net: Deep Bayes-SqueezeNet based diagnosis of the coronavirus disease 2019 (COVID-19) from X-ray images.

Authors:  Ferhat Ucar; Deniz Korkmaz
Journal:  Med Hypotheses       Date:  2020-04-23       Impact factor: 1.538

6.  The SARS-CoV-2 Outbreak: an Epidemiological and Clinical Perspective. (Review)

Authors:  Rebecca S Y Wong
Journal:  SN Compr Clin Med       Date:  2020-09-29

7.  Convolutional capsule network for COVID-19 detection using radiography images.

Authors:  Shamik Tiwari; Anurag Jain
Journal:  Int J Imaging Syst Technol       Date:  2021-03-02       Impact factor: 2.177

8.  CoroNet: A deep neural network for detection and diagnosis of COVID-19 from chest x-ray images.

Authors:  Asif Iqbal Khan; Junaid Latief Shah; Mohammad Mudasir Bhat
Journal:  Comput Methods Programs Biomed       Date:  2020-06-05       Impact factor: 5.428

9.  Covid-19: automatic detection from X-ray images utilizing transfer learning with convolutional neural networks.

Authors:  Ioannis D Apostolopoulos; Tzani A Mpesiana
Journal:  Phys Eng Sci Med       Date:  2020-04-03

10.  SARS-CoV-2 spread across the Colombian-Venezuelan border.

Authors:  Alberto Paniz-Mondolfi; Marina Muñoz; Carolina Florez; Sergio Gomez; Angelica Rico; Lisseth Pardo; Esther C Barros; Carolina Hernández; Lourdes Delgado; Jesús E Jaimes; Luis Pérez; Aníbal A Teherán; Hala Alejel Alshammary; Ajay Obla; Zenab Khan; Jayeeta Dutta; Adriana van de Guchte; Ana S Gonzalez-Reiche; Matthew M Hernandez; Emilia Mia Sordillo; Viviana Simon; Harm van Bakel; Martin S Llewellyn; Juan David Ramírez
Journal:  Infect Genet Evol       Date:  2020-11-04       Impact factor: 3.342

