Literature DB >> 35382156

Deep learning based fusion model for COVID-19 diagnosis and classification using computed tomography images.

R T Subhalakshmi1, S Appavu Alias Balamurugan2, S Sasikala3.   

Abstract

Recently, the COVID-19 pandemic has spread drastically, while only a limited quantity of rapid testing kits is available. Automated COVID-19 diagnosis models are therefore essential to identify the presence of the disease from radiological images. Earlier studies have focused on the development of Artificial Intelligence (AI) techniques for COVID-19 diagnosis using X-ray images. This paper develops a Deep Learning Based MultiModal Fusion technique, called DLMMF, for COVID-19 diagnosis and classification from Computed Tomography (CT) images. The proposed DLMMF model operates in three main stages, namely Wiener Filtering (WF) based pre-processing, feature extraction and classification. The model fuses deep features extracted by the VGG16 and Inception v4 models. Finally, a Gaussian Naïve Bayes (GNB) classifier identifies and classifies the test CT images into distinct class labels. The experimental validation of the DLMMF model uses the open-source COVID-CT dataset, which comprises a total of 760 CT images. The experimental outcomes show superior performance, with a maximum sensitivity of 96.53%, specificity of 95.81%, accuracy of 96.81% and F-score of 96.73%.
© The Author(s) 2021.

Keywords:  COVID-19; Deep learning multimodal fusion; Gaussian Naïve Bayes; convolutional neural network; deep learning; Wiener filtering

Year:  2022        PMID: 35382156      PMCID: PMC8968394          DOI: 10.1177/1063293X211021435

Source DB:  PubMed          Journal:  Concurr Eng Res Appl        ISSN: 1063-293X            Impact factor:   1.038


Introduction

In recent times, Coronavirus disease 2019 (COVID-19) has affected people all over the globe. It is a highly dangerous and contagious disease caused by a virus belonging to the family of Betacoronaviruses, called severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and previously known as the 2019 novel coronavirus (2019-nCoV) (Liu et al., 2020). It is believed to have evolved from an animal origin and to have been transmitted to human beings in Wuhan, China, in November and December 2019. At the time of this study, no specific medicine or vaccine was available for COVID-19. It has caused a massive fatality rate globally, and the recent death toll has reached around 244K (Vynnycky and White, 2010). COVID-19 severely affects the respiratory tract and lungs of human beings, causing pneumonia and many other lung-related diseases. The infected lungs fill with fluid, become inflamed and develop patches known as Ground-Glass Opacities (GGOs). It is very difficult to diagnose the symptoms of these infections with the currently available medical services, and additional clinical facilities are essential. Even though developers have made diverse efforts towards effective diagnosis, social distancing, wearing masks, washing hands regularly and self-quarantine remain the best preventive measures against COVID-19, as adopted by many countries. The major disadvantage of lockdown and quarantine, however, is that they heavily affect the GDP of a country, and many individuals are affected psychologically. The number of people affected by COVID-19 has increased rapidly worldwide; highly afflicted countries such as the USA, Italy and Spain have exceeded the death toll of China, which previously exhibited the maximum mortality globally.
The SARS epidemic of 2002–2003 was managed and eventually ended using classical control measures such as travel restrictions and patient isolation. In recent times, the same measures have been applied in numerous countries facing the COVID-19 outbreak, but their efficiency depends on how dangerous the disease is (Elsevier, 2020). Better prediction of COVID-19 transmission would be highly beneficial in persuading the public why adhering to these measures is significant. Acute Respiratory Distress Syndrome (ARDS) has been observed in COVID-19 patients (Lai et al., 2020). Reverse transcription-polymerase chain reaction (RT-PCR), the generally employed diagnostic test for COVID-19, suffers from low sensitivity in the early stages and a prolonged testing period, which facilitates further transmission; moreover, the extreme scarcity of the costly test kits (Xie et al., 2020) exacerbates the scenario. Because the number of professionals is limited while COVID-19 cases increase day by day, automated prediction based on Artificial Intelligence (AI) is assumed to be significant in reducing the testing duration. Modelling the transmission and impact of this disease is highly significant in understanding its effect. While conventional statistical modelling provides useful methods, AI technologies play a major role in building highly accurate detection approaches (Franquet, 2001). Deep Learning (DL) is particularly important in dealing with this critical situation. DL is a branch of Machine Learning (ML) that concentrates on automated feature extraction and image classification, and it is extensively applied to object detection tasks. Both ML and DL methods use AI strategies for mining, analysing and determining patterns in data.
Leveraging the advancements in these applications is advantageous for reliable decision making in the healthcare field, and computer-aided systems are nontrivial as novel data continue to emerge. DL evolved from Deep Convolutional Neural Networks (DCNNs), which perform automated mass feature extraction by computing convolution operations. The layers process the data non-linearly, each converting its input from a lower to a higher, more abstract level; the deeper layers of representation amplify the portions of the data that are important for discrimination and suppress unwanted variations. Typically, DL uses deeper networks than conventional ML and benefits from the application of big data. Studies on COVID-19, which appeared in December 2019, are still few in number, and these limited studies address detection of the virus; other studies have been performed for detecting many other human diseases. Zhang et al. (2020) described the treatment of coronavirus and its procedures; a patient who recovered from COVID-19 was discharged and monitored regularly. Darlenski and Tsankov (2020) investigated COVID-19 from a dermatological perspective and reported the effects of hygiene measures on the skin. Chen et al. (2020) concentrated on prevention measures for COVID-19 and the efficiency of novel treatments. Holland et al. (2020) described the kit deployed by emergency doctors for protection against the COVID-19 virus. Yang et al. (2020) determined the feasible impacts of the COVID-19 virus on young children; the study revealed the importance of protecting children with underlying chronic conditions and of isolating newborns. Lai et al. (2020) reviewed measures against the symptoms of coronavirus and pneumonia, noting that mainly adult patients are chronically affected.
Li et al. (2020) examined the genetic evolution and source of the COVID-19 virus. Such works are presently limited but expected to grow rapidly. The signs of COVID-19 disease resemble those of pneumonia, and a number of COVID-19 fatalities involve pneumonia. Togacar et al. (2019) proposed an intelligent classification model for pneumonia using chest X-ray images; DL models were applied, and simulation outcomes were reported for the AlexNet, VGG-16 and VGG-19 networks, with the selected dataset offering good accuracy. Sousa et al. (2013) employed computer-aided automated pneumonia diagnosis in newborns from radiographic images; the pneumonia data were divided among three classifiers, and SVM was found to be the most suitable classifier for the available data. Liang and Zheng (2019) deployed a CNN-based scheme for childhood pneumonia. Earlier studies have focused on the development of AI techniques using X-ray images for COVID-19 diagnosis. CT images are preferable to X-rays because a CT scan provides a massive quantity of data, enabling the physician to manipulate the data easily and to selectively enhance or discard structures in the images. Therefore, this paper develops a deep learning based multimodal fusion technique called DLMMF for COVID-19 diagnosis and classification from Computed Tomography (CT) images. The proposed DLMMF model operates in three main stages, namely pre-processing, feature extraction and classification. At the beginning, a Wiener Filter (WF) is applied to pre-process the input CT image. Then, a fusion-based feature extraction model using the VGG16 and Inception v4 models obtains the deep features.
Finally, a Gaussian Naïve Bayes (GNB) classifier identifies and classifies the test CT images into distinct class labels. The experimental validation of the DLMMF model uses the open-source COVID-CT dataset, which comprises a total of 760 CT images, and the simulation results confirm the effectiveness of the DLMMF model on the applied test images. A Computed Tomography (CT) scan produces in-depth structural views of organs and allows physicians to categorise inner structures. Unlike conventional X-rays, CT scans produce a set of slices of a body region without superimposing the other body structures, and thus give a far more detailed picture of the patient's condition than conventional X-rays. This detailed information can be used to determine whether there is a medical problem, as well as its extent and exact location. For these reasons, a number of deep learning based methodologies have recently been proposed for COVID-19 screening in CT scans (He et al., 2020; Mobiny et al., 2020; Pathak et al., 2020; Polsinelli et al., 2020; Soares et al., 2020; Wang et al., 2020). Sethy et al. (2020) proposed a coronavirus detection approach using an X-ray image dataset. The limitation of the X-ray dataset is that a patient in a critical situation may be unable to attend X-ray scanning. In that situation, CT scans play a vital role: they include a large amount of data, give the physician the ability to manipulate the data into various views without additional imaging of the patient, and provide the feasibility of selectively enhancing or removing structures from the images. A CT scan is considered more informative than plain X-rays in identifying and diagnosing the disease, and it is the recommended method for producing 3D images of the lungs. Sethy et al. (2021) performed a comparative study for the screening of COVID-19 via deep feature extraction with an SVM classifier.
Zubrinic et al. (2013) carried out a comparison between Naive Bayes and SVM classifiers for the categorisation of concept maps; the most important attributes were selected, the data were prepared for learning and classification, and training and classification were performed with both classifiers. In comparison, our research work applies the Gaussian Naive Bayes (GNB) classifier, which gives better results under n-fold cross validation: deep features from the pre-trained networks are fed into the GNB classifier, and the results of the deep learning multimodal fusion technique are computed as the average over the n cross-validation folds. SVM is based on sequential minimal optimisation; on observing the training and classification results of the SVM classifier, we noticed that our experiments achieve slightly better results when considering binary attributes rather than the occurrence counts of each word. The remainder of this paper is arranged as follows. Section 2 discusses the presented DLMMF algorithm, section 3 validates the performance of the presented model, and section 4 draws the conclusion.

The proposed DLMMF model

Figure 1 shows the working process of the DLMMF method. As the figure states, the DLMMF model initially performs WF based pre-processing to discard the noise present in the input CT image. Then, fused deep features are extracted by the VGGNet-16 and Inception v4 models. Finally, the extracted features are classified by the GNB method to detect COVID-19 from the CT images.
Figure 1.

Block diagram of DLMMF model

Pre-processing using WF Technique

Initially, the input CT images undergo pre-processing to eliminate noise, and class labelling is carried out. For noise elimination, an image pre-processing method is proposed to restore the image features corrupted by noise. In particular, adaptive filtering is applied to denoise the noise content of the image locally. Let the corrupted image be defined by $g(x,y)$, let the noise variance across the whole image be $\sigma_\eta^2$, let the local mean over a pixel window be $\mu_L$, and let the local variance in the window be $\sigma_L^2$. A feasible method of denoising the image is then

$$\hat{f}(x,y) = g(x,y) - \frac{\sigma_\eta^2}{\sigma_L^2}\,\big(g(x,y) - \mu_L\big).$$

When the noise variance across the image is equal to zero ($\sigma_\eta^2 = 0$), the filter returns $\hat{f}(x,y) = g(x,y)$. When the global noise variance is small and the local variance is greater than the global variance, the ratio $\sigma_\eta^2/\sigma_L^2$ is approximately zero, so again $\hat{f}(x,y) \approx g(x,y)$; a large local variance indicates the presence of an edge in the image window regarded, which is thereby preserved. When the local and global variances are equal, the equation reduces to $\hat{f}(x,y) = \mu_L$, i.e. the result is simply the mean value of the window. The filter takes the window size as input and computes the rest from the input image.
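A minimal numpy sketch of such a locally adaptive filter follows. The window size and noise variance are the tunable inputs; estimating the global noise variance as the mean of the local variances, when none is supplied, is an assumption, not something the paper specifies.

```python
import numpy as np

def adaptive_wiener(image, window=3, noise_var=None):
    """Locally adaptive Wiener-type noise filter (minimal sketch).

    Applies f_hat = mu_L + gain * (g - mu_L), where the gain shrinks
    towards 0 in flat regions (local variance ~ noise variance) and
    towards 1 near edges (local variance >> noise variance).
    """
    g = np.asarray(image, dtype=float)
    pad = window // 2
    padded = np.pad(g, pad, mode="reflect")
    mean = np.empty_like(g)
    var = np.empty_like(g)
    for i in range(g.shape[0]):                 # per-pixel window statistics
        for j in range(g.shape[1]):
            win = padded[i:i + window, j:j + window]
            mean[i, j] = win.mean()
            var[i, j] = win.var()
    if noise_var is None:                       # assumed estimate of sigma_eta^2
        noise_var = var.mean()
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, noise_var + 1e-12)
    return mean + gain * (g - mean)
```

On a perfectly flat image the gain is zero everywhere, so the filter returns the image unchanged; on a noisy flat region the output approaches the local mean, smoothing the noise.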

Feature extraction using fusion model

Here, two diverse feature extraction methods, namely VGGNet-16 and Inception v4, are applied to extract fused deep features from the pre-processed CT images for COVID-19 prediction and classification. The features extracted by these models are fused together and fed as input to the classifier.
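A minimal sketch of the fusion step, assuming each backbone yields one feature vector per image (the 512- and 1536-dimensional sizes below are typical of VGG16 and Inception v4 pooled features, and the L2 normalisation is an added assumption; the paper does not state whether the vectors are scaled before fusion):

```python
import numpy as np

def fuse_features(vgg_feats, inception_feats):
    """Fuse per-image feature vectors from two backbones by concatenation.

    Both inputs are (n_samples, dim) arrays; the fused output is
    (n_samples, dim_vgg + dim_inception). L2-normalising each half keeps
    one backbone from dominating the other in scale.
    """
    def l2(x):
        x = np.asarray(x, dtype=float)
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-12)
    return np.hstack([l2(vgg_feats), l2(inception_feats)])
```

The fused matrix is what the GNB classifier described later consumes, one row per CT image.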

CNN model

CNN comprises a set of convolutional layers to detect patterns that exist in the image. The merit of CNN is that it allows the design of a very deep network with few trainable parameters, which also reduces the time and difficulty of the training task. The CNN includes different layers, namely convolutional, activation, pooling, Fully Connected (FC) and SoftMax (SM) layers. A basic operation in image classification is horizontal or vertical edge recognition, attained by executing a convolution operation on the input image. The CNN considers a small square (or 'window') known as a filter and slides it over the image; every filter enables the CNN to identify a particular pattern in the image.

Convolution layer

It comprises filters that convolve across the width and height of the input volume. The outcome of the convolution layer is attained by performing a dot product between the filter weights and every location of the image. This generates a two-dimensional activation map that gives the responses of that filter at each spatial location. The variables involved in the process are the filter count, filter size, weights, stride and padding.
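The dot-product convolution and the resulting activation-map size can be sketched as follows (valid convolution with no padding, so the output side is (W - F) / S + 1; the Sobel-style vertical-edge filter is an illustrative choice, not one prescribed by the paper):

```python
import numpy as np

def conv2d_valid(image, kernel, stride=1):
    """Slide the filter over the image, taking a dot product at each location."""
    kh, kw = kernel.shape
    oh = (image.shape[0] - kh) // stride + 1   # output height: (W - F) / S + 1
    ow = (image.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            window = image[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(window * kernel)  # dot product with the weights
    return out

# A vertical-edge filter responds strongly where intensity changes left to right.
sobel_x = np.array([[1, 0, -1],
                    [2, 0, -2],
                    [1, 0, -1]], dtype=float)
```

Applied to an image whose left half is dark and right half bright, the activation map is zero over the flat regions and large in magnitude along the vertical boundary.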

Pooling layer

It intends to avoid overfitting and employs non-linear down-sampling on the activation map to minimise dimensionality and complexity and to speed up processing. The variables involved in the process are the filter size and stride; padding is not used in pooling. Pooling is applied to every input channel separately, so the input and output channel counts are identical. The two kinds of pooling, max pooling and average pooling, are defined below. Max pooling: the operation is structurally similar to a convolutional layer, but it takes the maximum neighbouring value at every individual location of the input, carried out on each channel individually. Average pooling: this layer takes the average of the values adjacent to every individual location in the input.
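Both pooling variants differ only in the reduction applied to each window, as this minimal sketch shows (single-channel input; the 2x2 window with stride 2 is the common default, not a value fixed by the paper):

```python
import numpy as np

def pool2d(x, size=2, stride=2, mode="max"):
    """Strided 2D pooling on a single channel: max or average per window."""
    oh = (x.shape[0] - size) // stride + 1
    ow = (x.shape[1] - size) // stride + 1
    out = np.zeros((oh, ow))
    op = np.max if mode == "max" else np.mean
    for i in range(oh):
        for j in range(ow):
            out[i, j] = op(x[i * stride:i * stride + size,
                             j * stride:j * stride + size])
    return out
```

A 4x4 map pooled with a 2x2 window and stride 2 shrinks to 2x2, keeping either the largest or the mean value of each block.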

SoftMax layer

It is employed in the output layer of the CNN to represent the categorical distribution over labels, giving the probability of each label for the input.
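The SoftMax mapping itself is a single normalised exponential; a minimal implementation follows (the max-subtraction is a standard numerical-stability step, not part of the definition):

```python
import numpy as np

def softmax(logits):
    """Map raw scores to a categorical distribution that sums to 1."""
    z = logits - np.max(logits, axis=-1, keepdims=True)  # stability shift
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)
```

Larger logits receive larger probabilities, and the outputs always sum to one, which is what lets the final layer be read as class probabilities.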

FC layers

The FC layer, otherwise called the hidden layer, exists in the traditional Neural Network (NN). Prior to this stage, the input array is transformed into a one-dimensional vector by means of a flattening layer. In the FC layer, every individual node is linked to every node in the following layer.

VGGNet-16 model

VGG-16 is one of the popular CNN architectures, with 16 layers, presented by the Oxford Visual Geometry Group in 2014; it has shown standard results in various image processing applications (Xu et al., 2019). VGG-16 substitutes large convolution filters with small filters while increasing the depth of the network, because CNNs with small filters are highly beneficial for enhancing classification accuracy. The expanded configuration of layers in VGG-16 is depicted in Figure 2. The VGG-16 CNN applied in this work is pre-trained on the ImageNet dataset, and the front layers of the pre-trained CNN provide applicable low-level universal features suitable for typical image processing tasks.
Figure 2.

VGGNet-16 model.

Inception v4 model

The fundamental concept of Inception is to segregate monotonous blocks into various sub-networks, so that the complete model fits in memory during training. Inception modules can also be tuned easily, meaning that the number of filters in individual layers can be changed without influencing the quality of the trained network. To improve the speed of the training process, the layer sizes must be tuned properly to accomplish a trade-off between the diverse sub-networks. Unlike earlier models, the latest Inception models were built with TensorFlow without such replica partitioning. This is enabled by modern memory optimisation of Back Propagation (BP), achieved by recomputing the activation tensors needed for determining gradients and estimating intermediate values. In addition, Inception-v4 is designed to eliminate unnecessary operations by using uniform Inception blocks for every grid size (Shankar et al., 2020). The entire structure of the Inception-v4 approach is showcased in Figure 3.
Figure 3.

Architecture of Inception v4 model

Residual inception blocks

In this model, the Inception blocks are followed by a filter-expansion layer that is utilised to scale up the dimensionality of the filter bank before the addition, so that it matches the depth of the input. This is important to compensate for the dimensionality reduction forced by the Inception block. Among the distinct residual Inception variants, Inception-v4 is moderate in cost despite the existence of numerous layers. A further difference between the residual and non-residual variants concerns Batch Normalization (BN), which is utilised only on top of the conventional layers. Since BN in TensorFlow consumes a large amount of memory, it is significant to limit the overall count of layers where BN is applied.

Scaling of the residuals

Here, if the filter count exceeds 1000, the residual variants exhibit instability, and the network "dies" at the primary stage of training: the last layer before the pooling layer starts to produce only zeros after some iterations. This cannot be removed by limiting the training measures, such as lowering the learning rate or adding an extra BN layer. However, scaling down the residuals before adding them to the previous-layer activations is found to be reliable in stabilising the learning process. Generally, scaling factors from 0.1 to 0.3 are applied when scaling the accumulated layer activations.
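The scaling described above amounts to damping the residual branch before the addition; the block below is an illustrative one-liner, with the 0.2 default chosen from the middle of the quoted 0.1-0.3 range:

```python
import numpy as np

def scaled_residual_add(shortcut, residual, scale=0.2):
    """Add a damped residual branch to the shortcut path.

    With scale in the 0.1-0.3 range, the residual contribution stays
    small enough to keep training of very wide blocks stable.
    """
    return shortcut + scale * residual
```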

GNB based classification

Once the features are extracted from the VGGNet-16 and Inception v4 models, they are fed as input to the GNB classifier, which determines the class labels of the CT images. In NB classification, the Bayesian network has one root node signifying the class and leaf nodes signifying the attributes. Let $C$ be the class variable with feasible values $c_1, \ldots, c_m$, and let $A_1, \ldots, A_n$ be the group of attributes or features of the surroundings, each with a finite domain. A classification based on the Bayesian probabilistic model with the maximum a posteriori (MAP) principle uses the discriminant function (Bustamante et al., 2006)

$$\operatorname{classify}(a_1, \ldots, a_n) = \arg\max_{c} \; P(c) \prod_{i=1}^{n} P(a_i \mid c),$$

where $(a_1, \ldots, a_n)$ is the entire assignment of attribute values, i.e. the novel instance to be classified; $P(c)$ is short for $P(C = c)$ and $P(a_i \mid c)$ is short for $P(A_i = a_i \mid C = c)$. The equation assumes conditional independence among the attributes. A typical method to manage continuous attributes in the NB classifier utilises Gaussian distributions for signifying the likelihood of the features conditioned on the classes. So, each attribute is determined by a bell-shaped Gaussian probability density function (PDF):

$$P(a_i \mid c) = \frac{1}{\sqrt{2\pi\sigma_{ic}^{2}}} \exp\!\left( -\frac{(a_i - \mu_{ic})^{2}}{2\sigma_{ic}^{2}} \right),$$

where $\mu_{ic}$ is the mean and $\sigma_{ic}^{2}$ is the variance of attribute $A_i$ in class $c$. The parameters required in the GNB are $2nm$ in total, where $n$ is the count of attributes and $m$ is the count of classes; in particular, a normal distribution must be determined for every continuous attribute and class. The parameters of these normal distributions are obtained from the training data as

$$\mu_{ic} = \frac{1}{N_c} \sum_{j : y_j = c} a_{ij}, \qquad \sigma_{ic}^{2} = \frac{1}{N_c} \sum_{j : y_j = c} (a_{ij} - \mu_{ic})^{2},$$

where $N_c$ is the count of instances with class $c$ and $N$ is the count of all instances utilised for training. The prior $P(c)$ for each class is computed as the relative frequency, such that $P(c) = N_c / N$.
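A compact numpy sketch of this MAP rule follows: class priors as relative frequencies, one Gaussian per attribute and class, and the decision taken in log space. The small epsilon added to the variances is a numerical-stability assumption, not part of the model.

```python
import numpy as np

class GaussianNaiveBayes:
    """Minimal Gaussian Naive Bayes with the MAP decision rule."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.mean_, self.var_, self.prior_ = [], [], []
        for c in self.classes_:
            Xc = X[y == c]
            self.mean_.append(Xc.mean(axis=0))          # mu_ic per attribute
            self.var_.append(Xc.var(axis=0) + 1e-9)     # sigma_ic^2 (+ epsilon)
            self.prior_.append(len(Xc) / len(X))        # P(c) = N_c / N
        return self

    def predict(self, X):
        # log P(c) + sum_i log N(x_i | mu_ic, var_ic), maximised over classes
        scores = []
        for mu, var, p in zip(self.mean_, self.var_, self.prior_):
            ll = -0.5 * np.sum(np.log(2 * np.pi * var)
                               + (X - mu) ** 2 / var, axis=1)
            scores.append(np.log(p) + ll)
        return self.classes_[np.argmax(np.vstack(scores), axis=0)]
```

Fitting on the fused deep-feature matrix and calling `predict` on held-out rows reproduces the classification stage of the pipeline; in practice the paper's two classes would be COVID and non-COVID.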

Experimental validation

The proposed DLMMF model has been simulated on a PC with an i5-8600K processor, a GeForce 1050Ti 4 GB GPU, 16 GB RAM, a 250 GB SSD and a 1 TB HDD. The Anaconda Navigator – Jupyter 5.4.0 notebook environment was used with different Python packages. The results are obtained by testing the proposed DLMMF model on CT images from the COVID-CT dataset (https://github.com/UCSD-AI4H/COVID-CT). Figure 4 shows sample COVID CT and non-COVID CT test images. We have compared the performance of the proposed model with existing methods available in the literature. For comparison purposes, the methods used (Khanday et al., 2020; Pathak et al., 2020) are Logistic Regression (LR), Multinomial Naïve Bayes (MNB), Support Vector Machine (SVM), Decision Tree (DT), bagging, AdaBoost, Random Forest (RF), Stochastic Gradient Boosting (SGB), Convolutional Neural Networks (CNN), Deep Transfer Learning (DTL), Artificial Neural Network (ANN) and CNN with Long Short Term Memory (LSTM).
Figure 4.

Sample images: (a) COVID CT images and (b) non-COVID CT images.

Table 1 and Figures 5 and 6 illustrate the classifier results analysis of the DLMMF model in terms of different measures under different Cross Validation (CV) folds. Under fold 1, the presented DLMMF model attained an accuracy of 96.30%. Similarly, under fold 2, the projected DLMMF method accomplished an accuracy of 96.40%.
Table 1.

Results analysis of proposed DLMMF model in terms of various measures.

Cross validation    Sensitivity    Specificity    Accuracy    F-score
Fold 1              96.80          95.80          96.30       96.80
Fold 2              96.90          95.90          96.40       96.20
Fold 3              96.50          95.60          96.80       97.10
Fold 4              96.30          96.10          97.10       96.30
Fold 5              96.20          96.30          97.30       96.20
Fold 6               96.30          95.50          96.90       96.50
Fold 7              96.40          95.40          96.50       96.80
Fold 8              96.70          96.30          96.80       97.20
Fold 9              96.40          95.40          97.10       97.30
Fold 10             96.80          95.80          96.90       96.90
Average             96.53          95.81          96.81       96.73
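The four measures reported for each fold can be computed from a binary confusion matrix as in the following sketch (positive class = COVID is an assumption about the label encoding; the paper does not state it explicitly):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, accuracy and F-score from a 2x2 confusion matrix."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))   # true positives
    tn = np.sum((y_true == 0) & (y_pred == 0))   # true negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))   # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))   # false negatives
    sensitivity = tp / (tp + fn)                 # recall on the positive class
    specificity = tn / (tn + fp)                 # recall on the negative class
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp)
    f_score = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, accuracy, f_score
```

Averaging these per-fold values over the ten folds yields the figures in the Average row of Table 1.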
Figure 5.

CV analysis of DLMMF model in terms of sensitivity and specificity.

Figure 6.

CV analysis of DLMMF model in terms of accuracy and F-score.

Simultaneously, under fold 3, the proposed DLMMF approach reached an accuracy of 96.80%, and under fold 4 an accuracy of 97.10%. Under fold 5, it obtained an accuracy of 97.30%; under fold 6, 96.90%; under fold 7, 96.50%; under fold 8, 96.80%; under fold 9, 97.10%; and under fold 10, 96.90%. Table 2 and Figures 7 and 8 illustrate the comparison of the results offered by the DLMMF model in terms of sensitivity. Figure 7 shows that the CNN is the worst performer, attaining the least sensitivity of 87.73%. The DTL model appears slightly better, with a sensitivity of 89.61%. The SVM and AdaBoost models result in an identical sensitivity of 91%, while the DT and Bagging models both reach 92%. The CNNLSTM and ANN models show moderate, close sensitivity values of 92.14% and 93.78%, respectively.
Moreover, the RF and SGB models give competitive results with an identical sensitivity of 94%. Furthermore, the LR and MNB models display competitive classification performance, attaining a high sensitivity of 96%. But the DLMMF surpasses the earlier models with the maximum sensitivity of 96.53%. The figure also shows that, in terms of specificity, the CNN is the inferior performer, accomplishing a low value of 86.97%, while the SVM and AdaBoost methods provide an identical specificity of 91.70%.
Table 2.

Comparative analysis of existing with proposed methods.

Methods             Sensitivity    Specificity    Accuracy    F-score
Proposed-DLMMF      96.53          95.81          96.81       96.73
LR                  96.00          95.43          96.20       95.00
MNB                 96.00          95.43          96.20       95.00
SVM                 91.00          91.70          90.60       86.00
DT                  92.00          92.40          92.50       92.00
Bagging             92.00          92.40          92.50       92.00
Adaboost            91.00          91.70          90.60       88.00
RF                  94.00          94.20          94.30       93.00
SGB                 94.00          94.20          94.30       93.00
CNN                 87.73          86.97          87.36       89.65
DTL                 89.61          92.03          90.75       90.43
ANN                 93.78          91.76          86.00       91.34
CNNLSTM             92.14          91.98          84.16       90.01
Figure 7.

Comparative analysis of DLMMF model in terms of sensitivity and specificity.

Figure 8.

Comparative analysis of DLMMF model in terms of accuracy and F-score.

Along with that, the ANN technique exhibits a moderate specificity of 91.76%, while the CNNLSTM and DTL methods reach specificity values of 91.98% and 92.03%, respectively. The DT and Bagging frameworks show a considerable, identical specificity of 92.40%, and the RF and SGB approaches give competing results with a specificity of 94.20%. The LR and MNB methods reach higher specificity values of 95.43%; however, the DLMMF outperforms the previous methods with a high specificity of 95.81%. Figure 8 depicts the comparison of the results provided by the DLMMF model by means of accuracy. The figure implies that the CNNLSTM is the inferior performer, accomplishing the lowest accuracy of 84.16%, followed by the ANN method with a moderate accuracy of 86%. Meanwhile, the CNN approach results in an accuracy of 87.36%, while the SVM and AdaBoost schemes lead to an identical accuracy of 90.60% and the DTL framework a gradual result of 90.75%. Besides, the DT and Bagging approaches showcase a reasonable, identical accuracy of 92.50%, and the RF and SGB methods give competing results with the same accuracy of 94.30%. Moreover, the LR and MNB schemes exhibit competitive classification performance, accomplishing a higher accuracy of 96.20%. Thus, the DLMMF outperforms the conventional methods with the highest accuracy of 96.81%. The figure also demonstrates that the SVM is the poorest performer in F-score, accomplishing a minimal value of 86%.
In line with this, the AdaBoost and CNN methods achieve slightly better F-scores of 88% and 89.65%, respectively, while the DTL and CNNLSTM schemes exhibit close values of 90.43% and 90.01%. The ANN technique offers an F-score of 91.34%, and the DT and Bagging models both result in an equivalent F-score of 92%. In addition, the RF and SGB methods provide equal results with the same F-score of 93%, and the LR and MNB methods reach high F-score values of 95%. Hence, the DLMMF performs better than the conventional models, with the maximum F-score of 96.73%. The confusion matrix and ROC curve of the proposed classification model are shown in Figure 9 and Figure 10, respectively.
Figure 9.

Confusion matrix for COVID-19 CT images.

Figure 10.

ROC curve for COVID-19 CT images.


Conclusion

This paper has developed the DLMMF technique for COVID-19 diagnosis and classification from CT images. The proposed DLMMF model operates in three main stages, namely pre-processing, feature extraction and classification. A fusion-based feature extraction model using the VGG16 and Inception v4 models obtains the deep features, and finally a GNB classifier identifies and classifies the test CT images into distinct class labels. The experimental validation of the DLMMF model uses the open-source COVID-CT dataset, which comprises a total of 760 CT images. The experimental outcomes demonstrated superior performance, with a maximum sensitivity of 96.53%, specificity of 95.81%, accuracy of 96.81% and F-score of 96.73%. As part of future scope, the presented model can be extended to Internet of Things (IoT) enabled smart healthcare scenarios.
References (18 in total)

1.  Computer aid screening of COVID-19 using X-ray and CT scan images: An inner comparison.

Authors:  Prabira Kumar Sethy; Santi Kumari Behera; Komma Anitha; Chanki Pandey; M R Khan
Journal:  J Xray Sci Technol       Date:  2021       Impact factor: 1.535

2.  Deep Transfer Learning Based Classification Model for COVID-19 Disease.

Authors:  Y Pathak; P K Shukla; A Tiwari; S Stalin; S Singh; P K Shukla
Journal:  Ing Rech Biomed       Date:  2020-05-20

3.  Machine learning based approaches for detecting COVID-19 using clinical text data.

Authors:  Akib Mohi Ud Din Khanday; Syed Tanzeel Rabani; Qamar Rayees Khan; Nusrat Rouf; Masarat Mohi Ud Din
Journal:  Int J Inf Technol       Date:  2020-06-30

4.  Genetic evolution analysis of 2019 novel coronavirus and coronavirus from other species.

Authors:  Chun Li; Yanling Yang; Linzhu Ren
Journal:  Infect Genet Evol       Date:  2020-03-10       Impact factor: 3.342

5.  COVID-19 pandemic and the skin: what should dermatologists know?

Authors:  Razvigor Darlenski; Nikolai Tsankov
Journal:  Clin Dermatol       Date:  2020-03-24       Impact factor: 3.541

Review 6.  Asymptomatic carrier state, acute respiratory disease, and pneumonia due to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2): Facts and myths.

Authors:  Chih-Cheng Lai; Yen Hung Liu; Cheng-Yi Wang; Ya-Hui Wang; Shun-Chung Hsueh; Muh-Yen Yen; Wen-Chien Ko; Po-Ren Hsueh
Journal:  J Microbiol Immunol Infect       Date:  2020-03-04       Impact factor: 4.399

7.  Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and coronavirus disease-2019 (COVID-19): The epidemic and the challenges.

Authors:  Chih-Cheng Lai; Tzu-Ping Shih; Wen-Chien Ko; Hung-Jen Tang; Po-Ren Hsueh
Journal:  Int J Antimicrob Agents       Date:  2020-02-17       Impact factor: 5.283

8.  Chest CT for Typical Coronavirus Disease 2019 (COVID-19) Pneumonia: Relationship to Negative RT-PCR Testing.

Authors:  Xingzhi Xie; Zheng Zhong; Wei Zhao; Chao Zheng; Fei Wang; Jun Liu
Journal:  Radiology       Date:  2020-02-12       Impact factor: 11.105

9.  A deep learning algorithm using CT images to screen for Corona virus disease (COVID-19).

Authors:  Shuai Wang; Bo Kang; Jinlu Ma; Xianjun Zeng; Mingming Xiao; Jia Guo; Mengjiao Cai; Jingyi Yang; Yaodong Li; Xiangfei Meng; Bo Xu
Journal:  Eur Radiol       Date:  2021-02-24       Impact factor: 5.315

