
A modified deep convolutional neural network for detecting COVID-19 and pneumonia from chest X-ray images based on the concatenation of Xception and ResNet50V2.

Mohammad Rahimzadeh1, Abolfazl Attar2.   

Abstract

In this paper, we trained several deep convolutional networks with introduced training techniques for classifying X-ray images into three classes, normal, pneumonia, and COVID-19, based on two open-source datasets. Our data contains 180 X-ray images that belong to persons infected with COVID-19, and we applied methods to achieve the best possible results. We introduce training techniques that help the network learn better when the dataset is unbalanced (few COVID-19 cases along with many cases from the other classes). We also propose a neural network that is a concatenation of the Xception and ResNet50V2 networks; this network achieved the best accuracy by utilizing multiple features extracted by two robust networks. For evaluation, we tested the network on 11302 images to report the actual accuracy achievable in real circumstances. The average accuracy of the proposed network for detecting COVID-19 cases is 99.50%, and the overall average accuracy for all classes is 91.4%.
© 2020 The Authors.


Keywords:  COVID-19; Chest X-ray images; Convolutional neural networks; Coronavirus; Deep feature extraction; Deep learning; Transfer learning

Year:  2020        PMID: 32501424      PMCID: PMC7255267          DOI: 10.1016/j.imu.2020.100360

Source DB:  PubMed          Journal:  Inform Med Unlocked        ISSN: 2352-9148


Introduction

The pervasive spread of the coronavirus around the world has quarantined many people and crippled many industries, with a devastating effect on quality of life. Because of the high transmissibility of the coronavirus, detecting this disease (COVID-19) plays an important role in controlling it and planning preventative measures. Demographic conditions such as the age and sex of individuals, as well as urban parameters such as temperature and humidity, also affect the prevalence of this disease in different parts of the world [1,2]. The lack of diagnostic tools and the limitations of their production have slowed disease detection, which in turn increases the number of patients and casualties. The incidence of other diseases and the number of casualties due to COVID-19 will decrease if the disease is detected quickly. The first step is detection: recognizing the symptoms of the disease and using distinctive signs to identify the coronavirus accurately. Depending on the type of coronavirus, symptoms can range from those of the common cold to fever, cough, shortness of breath, and acute respiratory problems. The patient may also have a few days of cough with no apparent cause [3]. Unlike SARS, this coronavirus affects not only the respiratory system but also other vital organs, such as the kidneys and liver [4]. Symptoms of the new coronavirus leading to COVID-19 usually begin a few days after the person becomes infected, although in some people the symptoms may appear somewhat later. According to Ref. [5] and WHO [6], respiratory problems are one of the main symptoms of COVID-19 and can be detected by X-ray imaging of the chest. Chest CT scans can also show the disease even when symptoms are mild, so analyzing these images can reveal the presence of the disease in suspected patients, even before initial symptoms appear [7].
Using these data can also overcome the limitations of other tools, such as the lack of diagnostic kits and the limits on their production. The advantage of using CT scans and X-ray images is the availability of CT scanners and X-ray imaging systems in most hospitals and laboratories, and the ease of access to the data needed to train a network and thus detect the disease. In the absence of common symptoms such as fever, chest CT scans and X-ray images have a relatively good ability to reveal the disease [8]. Having specialists diagnose the disease is a common method of detecting COVID-19 in laboratories: the specialist uses the symptoms and lesions in the chest radiology image to distinguish COVID-19 from a healthy person or a person suffering from other diseases. This procedure is costly [5,9]. In recent years, computer vision and deep learning have been used to detect many diseases and lesions in the body automatically [10]. Some examples are: detection of tumor types and volumes in the lungs, breast, head, and brain [11,12]; state-of-the-art bone suppression in X-rays, diabetic retinopathy classification, prostate segmentation, and nodule classification [10]; skin lesion classification and analysis of the myocardium in coronary CT angiography [13]; sperm detection and tracking [14]; etc. Given that analyzing chest CT scans or X-ray images is one of the methods of diagnosing COVID-19, computer vision and deep learning can play a beneficial role in diagnosing this disease. Since the disease became widespread, many researchers have used machine vision and deep learning methods and obtained good results. Due to the sensitivity of the COVID-19 diagnosis, diagnostic accuracy is one of the main challenges in our research; we also focus on increasing detection efficiency given the limited open-source data available.
In this article, we try to improve COVID-19 detection and reduce false COVID-19 detections. This is done by combining two robust deep convolutional neural networks and optimizing the training parameters. Besides, we also propose a method for training the network when the dataset is imbalanced. In [8,15], statistical analysis of CT scans was performed by several specialists and diagnosticians, who classified the suspected patients into several classes for diagnosis and treatment. Because of the superiority of computer vision and deep learning in the analysis of medical images, once the reliability of chest CT scans for COVID-19 detection was established, researchers used these tools to diagnose COVID-19. Artificial intelligence quickly proved useful for detecting the disease and measuring the rate of infection and damage to the lungs from CT scans over the course of the disease, with promising results [16]. In [17], an innovative CNN was used to classify and predict COVID-19 from lung CT scans. [16] used deep learning to detect COVID-19 and segment the lung masses caused by the coronavirus in 2D and 3D images. COVID-Net uses a lightweight residual projection-expansion-projection-extension (PEPX) design pattern and was investigated through quantitative and qualitative analysis [18]. In another study, pre-trained ResNet50, InceptionV3, and InceptionResNetV2 models were used with transfer learning techniques to classify chest X-ray images into normal and COVID-19 classes [19]. In Ref. [20], COVNet is presented to predict COVID-19 from CT scans that have been segmented using U-Net [21]. Another study combined a Human-in-the-Loop (HITL) strategy, involving a group of chest radiologists, with deep learning-based methods to segment and measure infection in CT scans [22]. In Ref. [23], the authors tried to detect COVID-19 and Influenza-A viral pneumonia in their data; they used the classical ResNet-18 network structure to extract features, and an innovative CNN with a location-attention mechanism uses these features to classify the data. The remainder of the paper is organized as follows: In Section 2, we describe the proposed neural network, the dataset, and the training techniques. In Section 3, we present the experimental results, which are discussed in Section 4. In Section 5, we conclude the paper and point to the trained networks and code used in this research.

Methodology

Neural networks

Deep convolutional neural networks are useful in machine vision tasks. They have driven advances in many fields, such as agriculture [24], medical disease diagnosis [25,26], and industry [27]. The superiority of these networks comes from the robust and valuable semantic features they generate from input data. Here, the main task of the deep networks is detecting infection in X-ray images, i.e., classifying the X-ray images into normal, pneumonia, or COVID-19 classes. Some of the most powerful and widely used deep convolutional networks are VGG [28], ResNet [29], DenseNet [30], Inception [31], and Xception [32]. Xception is a deep convolutional neural network that introduced new inception layers, constructed from depthwise convolution layers followed by a pointwise convolution layer. Xception achieved the third-best results on the ImageNet dataset [33], after InceptionResNetV2 [34] and NASNet-Large [35]. ResNet50V2 [36] is a modified version of ResNet50 that performs better than ResNet50 and ResNet101 on the ImageNet dataset. In ResNet50V2, a modification was made to the propagation formulation of the connections between blocks; it also achieves good results on the ImageNet dataset. The pre-processed input images in our dataset are 300 × 300 pixels. Xception generates a 10 × 10 × 2048 feature map at its last feature-extraction layer, and ResNet50V2 produces a feature map of the same size at its final layer. Because both networks generate feature maps of the same size, we concatenated their features so that, by using both the inception-based layers and the residual-based layers, the quality of the generated semantic features would be enhanced. The concatenated neural network is designed by concatenating the extracted features of Xception and ResNet50V2 and then connecting the concatenated features to a convolutional layer that is connected to the classifier.
The kernel size of the convolutional layer added after the concatenated features was 1 × 1, with 1024 filters and no activation function. This layer extracts more valuable semantic features across all channels at each spatial position, with each channel being a feature map, and helps the network learn better from the concatenated features extracted from Xception and ResNet50V2. The architecture of the concatenated network is depicted in Fig. 1.
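The architecture just described can be sketched in tf.keras. This is not the authors' exact code: the flatten-plus-softmax head is an assumption (the paper only specifies the 1 × 1 convolution before the classifier), and `weights=None` is used so the example runs offline, whereas the paper initializes both backbones from ImageNet weights (`weights="imagenet"`).

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import Xception, ResNet50V2

inputs = layers.Input(shape=(300, 300, 3))
xception = Xception(include_top=False, weights=None, input_tensor=inputs)
resnet = ResNet50V2(include_top=False, weights=None, input_tensor=inputs)

# Both backbones emit a 10 x 10 x 2048 feature map for a 300 x 300 input;
# concatenating along the channel axis gives 10 x 10 x 4096.
features = layers.Concatenate()([xception.output, resnet.output])

# 1 x 1 convolution, 1024 filters, no activation: mixes the two feature
# sets channel-wise at every spatial position.
x = layers.Conv2D(1024, kernel_size=1, activation=None)(features)

# Assumed classifier head (not specified in the paper).
x = layers.Flatten()(x)
outputs = layers.Dense(3, activation="softmax")(x)  # normal / pneumonia / COVID-19

model = Model(inputs, outputs)
```

Feeding the same input tensor to both backbones lets the two feature extractors share one input pipeline, which is what makes the channel-wise concatenation straightforward.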
Fig. 1

The architecture of the concatenated network.


Dataset

We used two open-source datasets in our work. The covid chestxray dataset is taken from this GitHub repository (https://github.com/ieee8023/covid-chestxray-dataset), which was prepared by Ref. [37]. This dataset consists of X-ray and CT scan images of patients infected with COVID-19, SARS, Streptococcus, ARDS, Pneumocystis, and other types of pneumonia. We only considered the X-ray images; in total, there were 180 images from 118 cases with COVID-19 and 42 images from 25 cases with Pneumocystis, Streptococcus, and SARS, which were treated as pneumonia. The second dataset was taken from (https://www.kaggle.com/c/rsna-pneumonia-detection-challenge) and contains 6012 cases with pneumonia and 8851 normal cases. We combined these two datasets; the details are listed in Table 1.
Table 1

Composition of the number of allocated images to training and validation set in both datasets.

| Dataset | COVID-19 | Pneumonia | Normal |
| --- | --- | --- | --- |
| covid chestxray dataset | 180 | 42 | 0 |
| rsna pneumonia detection challenge | 0 | 6012 | 8851 |
| Total | 180 | 6054 | 8851 |
| Training set | 149 | 1634 | 2000 |
| Validation set | 31 | 4420 | 6851 |
As stated, we only had 180 cases infected with COVID-19, which is very little data for one class compared to the others. If we had combined many images from the normal or pneumonia classes with the few COVID-19 images for training, the network would have learned to detect the pneumonia and normal classes very well, but not the COVID-19 class, because of the unbalanced dataset. In that case, although the network could not identify COVID-19 properly, the overall accuracy would still be very high, since there are many more images of the pneumonia and normal classes than of the COVID-19 class; the COVID-19 detection accuracy, however, would not be. This is not our goal, because the main purpose is to detect COVID-19 cases correctly rather than to flag wrong COVID-19 cases. The best way to solve this problem is to balance the dataset, providing the network with almost equal amounts of data from each class during training, so that it learns to identify all classes. Because we did not have access to more open-source COVID-19 data to enlarge this class, we instead selected numbers of pneumonia and normal images almost equal to the number of COVID-19 images. We trained the networks in 8 consecutive phases. In each phase, we selected 250 normal cases and 234 pneumonia cases along with the 149 COVID-19 cases, for a total of 633 cases per training phase. All of the COVID-19 images and 34 of the pneumonia images were common between the training phases, while 250 normal cases and 200 pneumonia cases were unique to each phase. The 149 COVID-19 and 34 pneumonia cases shared by all training phases came from the covid chestxray dataset [37], and the rest of the data came from the other dataset. Based on this categorization, our training set comprises 8 phases and 3783 images.
By doing so, the network sees an almost equal number of images from each class, which improves COVID-19 detection along with the detection of pneumonia and normal cases. Because we had more pneumonia and normal cases, we showed the network different pneumonia and normal cases alongside the COVID-19 cases in each phase. Implementing this method has two advantages: first, the network learns the features of the COVID-19 class better along with the other classes; second, detection of the normal and pneumonia classes improves greatly. Better detection of pneumonia and normal cases means fewer wrongly detected COVID-19 cases, which is one of our goals. This method can be used in any circumstance where the dataset is highly unbalanced. We present our way of allocating the images of the datasets into eight different phases as a flowchart in Fig. 3. Some of the images in our dataset are shown in Fig. 2.
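The phase construction above can be sketched as follows. The string identifiers are placeholders for the actual image files; the counts match the paper (149 COVID-19 and 34 pneumonia images shared by every phase, 8 × 200 unique pneumonia images, 8 × 250 unique normal images, 3783 training images in total).

```python
# Minimal sketch of the 8-phase training-set construction, with
# placeholder identifiers standing in for the real image files.
import random

random.seed(0)
covid = [f"covid_{i}" for i in range(149)]                  # shared by every phase
shared_pneumonia = [f"pneu_shared_{i}" for i in range(34)]  # shared by every phase
pneumonia_pool = [f"pneu_{i}" for i in range(1600)]         # unique: 200 per phase
normal_pool = [f"norm_{i}" for i in range(2000)]            # unique: 250 per phase
random.shuffle(pneumonia_pool)
random.shuffle(normal_pool)

phases = []
for p in range(8):
    phases.append(covid
                  + shared_pneumonia
                  + pneumonia_pool[p * 200:(p + 1) * 200]
                  + normal_pool[p * 250:(p + 1) * 250])

assert all(len(phase) == 633 for phase in phases)           # 149 + 234 + 250
assert len(set().union(*map(set, phases))) == 3783          # total training images
```

Each phase is nearly class-balanced, while the unique pneumonia and normal slices ensure the network still sees the full variety of the two majority classes across the 8 phases.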
Fig. 3

The flowchart of the proposed method for training set preparation.

Fig. 2

Examples of the images in our dataset.


Training phase

As described in the dataset subsection 2.2, we allocated 8 phases for training. To report more reliable results, we used five folds for training, where in every fold the training set was made of 8 phases as described. We trained ResNet50V2 [36], Xception [32], and a concatenation of the Xception and ResNet50V2 networks with the explained method. The concatenated network showed higher accuracy than the others. Among the several networks we tested in this project, Xception [32] and ResNet50V2 [36] worked as well as or better than the others at extracting deep features. By concatenating the output features of both networks, we let the classifier learn from both feature vectors, which resulted in better accuracy. The training parameters are described in Table 2.
Table 2

In this table, we have listed the parameters and functions we used in the training procedure.

| Training parameters | Xception | ResNet50V2 | Concatenated network |
| --- | --- | --- | --- |
| Learning rate | 1e-4 | 1e-4 | 1e-4 |
| Batch size | 30 | 30 | 20 |
| Optimizer | Nadam | Nadam | Nadam |
| Loss function | Categorical cross-entropy | Categorical cross-entropy | Categorical cross-entropy |
| Epochs per training phase | 100 | 100 | 100 |
| Horizontal/vertical flipping | Yes | Yes | Yes |
| Zoom range | 5% | 5% | 5% |
| Rotation range | 0-360° | 0-360° | 0-360° |
| Width/height shifting | 5% | 5% | 5% |
| Shift range | 5% | 5% | 5% |
| Re-scaling | 1/255 | 1/255 | 1/255 |
Based on Table 2, we trained the networks using the categorical cross-entropy loss function and the Nadam optimizer, with the learning rate set to 1e-4. We trained each network for 100 epochs per training phase; with 8 training phases, the models were trained for 800 epochs in total. For Xception and ResNet50V2, we set the batch size to 30; as the concatenated network has more parameters, we set its batch size to 20. Data augmentation was also used to increase training efficiency and prevent the models from overfitting. We implemented the neural networks with the Keras [38] library on a Tesla P100 GPU with 25 GB of RAM, provided by Google Colaboratory notebooks.
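The settings in Table 2 map directly onto Keras. Below is a runnable sketch of that configuration; the tiny Sequential model and the random batch are placeholders for illustration only, since the paper trains Xception, ResNet50V2, and the concatenated network on the real phase data instead.

```python
# Sketch of the Table 2 training configuration, assuming tf.keras.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentations from Table 2.
augmenter = ImageDataGenerator(
    rescale=1.0 / 255,            # re-scaling
    horizontal_flip=True,         # horizontal/vertical flipping
    vertical_flip=True,
    zoom_range=0.05,              # 5% zoom
    rotation_range=360,           # 0-360 degree rotation
    width_shift_range=0.05,       # 5% width/height shifting
    height_shift_range=0.05,
)

model = tf.keras.Sequential([     # placeholder for the concatenated network
    layers.Input(shape=(300, 300, 3)),
    layers.Conv2D(8, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(3, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# One short pass on random placeholder data; the paper instead runs
# 8 phases x 100 epochs (800 epochs total) on the phase image sets.
x = np.random.rand(20, 300, 300, 3).astype("float32") * 255
y = tf.keras.utils.to_categorical(np.random.randint(0, 3, 20), 3)
model.fit(augmenter.flow(x, y, batch_size=20), epochs=1, verbose=0)
```

In the real training loop, each of the 8 phase folders would be fed through the same augmenter for 100 epochs before moving to the next phase.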

Results

We validated our networks on 31 cases of COVID-19, 4420 cases of pneumonia, and 6851 normal cases. The reason our training data was smaller than the validation data is that we had few COVID-19 cases among many normal and pneumonia cases; we could not use many images from the other two classes alongside the few COVID-19 cases for training, because the network would then fail to learn COVID-19 features. To solve this issue, we selected 3783 images for training across 8 different phases and evaluated the networks on the remainder of the data, so that our trained networks' ultimate performance would be clear. Note that, exceptionally, in fold 3 we had 30 COVID-19 cases for validation, and the other 150 cases were allocated to training. We used transfer learning in the training process: for all of the networks, we started from the pre-trained ImageNet weights [33] and then continued training under the conditions explained above on our dataset. We also monitored the accuracy metric on the validation set after each epoch to find the best and most converged version of each trained network. The evaluation results are presented in Fig. 4, which shows the confusion matrices of each network for folds one and three. Table 3 and Table 4 show the details of our results. We report four different metrics (sensitivity, specificity, precision, and accuracy) for each of the three classes.
Fig. 4

This figure shows the confusion matrices of the networks for folds 1 and 3.

Table 3

This table reports the numbers of true positives, false negatives, and false positives for each class.

| Fold | Network | COVID-19 correct | COVID-19 not detected | COVID-19 wrongly detected | Pneumonia correct | Pneumonia not detected | Pneumonia wrongly detected | Normal correct | Normal not detected | Normal wrongly detected |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Xception | 26 | 5 | 101 | 3983 | 437 | 569 | 6245 | 606 | 378 |
| 1 | ResNet50V2 | 27 | 4 | 96 | 3858 | 562 | 480 | 6334 | 517 | 507 |
| 1 | Concatenated | 26 | 5 | 68 | 3745 | 675 | 309 | 6526 | 325 | 628 |
| 2 | Xception | 23 | 8 | 42 | 3874 | 546 | 409 | 6426 | 425 | 528 |
| 2 | ResNet50V2 | 22 | 9 | 67 | 3659 | 761 | 501 | 6340 | 511 | 713 |
| 2 | Concatenated | 23 | 8 | 27 | 3913 | 507 | 434 | 6413 | 438 | 492 |
| 3 | Xception | 21 | 9 | 28 | 3942 | 478 | 436 | 6411 | 440 | 463 |
| 3 | ResNet50V2 | 22 | 8 | 97 | 3770 | 650 | 392 | 6433 | 418 | 587 |
| 3 | Concatenated | 25 | 5 | 35 | 3847 | 573 | 342 | 6502 | 349 | 550 |
| 4 | Xception | 22 | 9 | 42 | 3818 | 602 | 433 | 6411 | 440 | 576 |
| 4 | ResNet50V2 | 22 | 9 | 78 | 4015 | 405 | 758 | 6065 | 786 | 364 |
| 4 | Concatenated | 26 | 5 | 77 | 3860 | 560 | 480 | 6340 | 511 | 519 |
| 5 | Xception | 21 | 10 | 41 | 4041 | 379 | 502 | 6335 | 516 | 362 |
| 5 | ResNet50V2 | 21 | 10 | 42 | 3604 | 816 | 284 | 6549 | 302 | 802 |
| 5 | Concatenated | 24 | 7 | 43 | 3941 | 479 | 390 | 6442 | 409 | 462 |
Table 4

Some of the evaluation metrics have been reported in this table.

| Fold | Network | Overall accuracy | COVID-19 sensitivity | Pneumonia sensitivity | Normal sensitivity | COVID-19 specificity | Pneumonia specificity | Normal specificity | COVID-19 accuracy | Pneumonia accuracy | Normal accuracy | COVID-19 precision | Pneumonia precision | Normal precision |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Xception | 90.72 | 83.87 | 90.11 | 91.15 | 99.1 | 91.73 | 91.51 | 99.06 | 91.10 | 91.29 | 20.47 | 87.50 | 94.29 |
| 1 | ResNet50V2 | 90.41 | 87.09 | 87.28 | 92.45 | 99.15 | 93.03 | 88.61 | 99.12 | 90.78 | 90.94 | 21.95 | 88.93 | 92.58 |
| 1 | Concatenated | 91.10 | 83.87 | 84.72 | 95.25 | 99.4 | 95.51 | 85.89 | 99.35 | 91.29 | 91.57 | 27.65 | 92.37 | 91.22 |
| 2 | Xception | 91.33 | 74.19 | 87.64 | 93.79 | 99.63 | 94.06 | 88.14 | 99.56 | 91.55 | 91.57 | 35.38 | 90.45 | 92.40 |
| 2 | ResNet50V2 | 88.66 | 70.96 | 82.78 | 92.54 | 99.41 | 92.72 | 83.98 | 99.33 | 88.83 | 89.17 | 24.71 | 87.95 | 89.89 |
| 2 | Concatenated | 91.56 | 74.19 | 88.52 | 93.60 | 99.76 | 93.69 | 88.95 | 99.69 | 91.67 | 91.77 | 46 | 90.01 | 92.87 |
| 3 | Xception | 91.79 | 70 | 89.18 | 93.57 | 99.75 | 93.66 | 89.6 | 99.67 | 91.91 | 92.01 | 42.85 | 90.04 | 93.26 |
| 3 | ResNet50V2 | 90.47 | 73.33 | 85.29 | 93.89 | 99.14 | 94.30 | 86.81 | 99.07 | 90.78 | 91.11 | 18.48 | 90.58 | 91.63 |
| 3 | Concatenated | 91.79 | 83.33 | 87.03 | 94.90 | 99.69 | 95.03 | 87.64 | 99.65 | 91.90 | 92.04 | 41.66 | 91.83 | 92.20 |
| 4 | Xception | 90.70 | 70.96 | 86.38 | 93.57 | 99.63 | 93.71 | 87.06 | 99.55 | 90.84 | 91.01 | 34.37 | 89.81 | 91.75 |
| 4 | ResNet50V2 | 89.38 | 70.96 | 90.83 | 88.52 | 99.31 | 88.99 | 91.82 | 99.23 | 89.71 | 89.82 | 22 | 84.11 | 94.33 |
| 4 | Concatenated | 90.47 | 83.87 | 87.33 | 92.54 | 99.32 | 93.03 | 88.34 | 99.27 | 90.8 | 90.89 | 25.24 | 88.94 | 92.43 |
| 5 | Xception | 91.99 | 67.74 | 91.42 | 92.46 | 99.64 | 92.71 | 91.87 | 99.55 | 92.20 | 92.23 | 33.87 | 88.95 | 94.59 |
| 5 | ResNet50V2 | 90.01 | 67.74 | 81.53 | 95.59 | 99.63 | 95.87 | 81.98 | 99.54 | 90.27 | 90.23 | 33.33 | 92.69 | 89.08 |
| 5 | Concatenated | 92.08 | 77.41 | 89.16 | 94.03 | 99.62 | 94.33 | 89.62 | 99.56 | 92.31 | 92.29 | 35.82 | 90.99 | 93.30 |
| Average | Xception | 91.31 | 73.35 | 88.95 | 92.91 | 99.55 | 93.17 | 89.63 | 99.48 | 91.52 | 91.62 | 33.39 | 89.35 | 93.26 |
| Average | ResNet50V2 | 89.79 | 74.02 | 85.54 | 92.60 | 99.33 | 92.98 | 86.64 | 99.26 | 90.07 | 90.25 | 24.09 | 88.85 | 91.50 |
| Average | Concatenated | 91.40 | 80.53 | 87.35 | 94.06 | 99.56 | 94.32 | 88.09 | 99.50 | 91.60 | 91.71 | 35.27 | 90.83 | 92.40 |
We also report the overall accuracy metric. In these equations, TP (true positive) is the number of correctly classified images of a class, FP (false positive) is the number of images wrongly classified as that class, FN (false negative) is the number of images of a class that were detected as another class, and TN (true negative) is the number of images that do not belong to a class and were not classified as belonging to it. The per-class metrics are defined as:

Sensitivity = TP / (TP + FN)
Specificity = TN / (TN + FP)
Precision = TP / (TP + FP)
Accuracy = (TP + TN) / (TP + TN + FP + FN)

The overall accuracy is the fraction of all validation images that were classified correctly.
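These per-class metrics can be checked directly against the reported counts. Using the fold-1 concatenated-network COVID-19 numbers from Table 3 (26 true positives, 5 false negatives, 68 false positives, leaving 11203 true negatives out of 11302 validation images):

```python
# Recomputing the per-class metrics from the TP/FP/FN/TN counts.
def class_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision":   tp / (tp + fp),
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
    }

# COVID-19 class, fold 1, concatenated network (Table 3).
m = class_metrics(tp=26, fp=68, fn=5, tn=11203)
print({k: round(v * 100, 2) for k, v in m.items()})
# -> {'sensitivity': 83.87, 'specificity': 99.4, 'precision': 27.66, 'accuracy': 99.35}
```

These values match the fold-1 COVID-19 columns of Table 4 (which lists the precision as 27.65, apparently truncating rather than rounding 27.659...).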

Discussion

The confusion matrices and tables show that the concatenated network performs better at detecting COVID-19 and at avoiding false COVID-19 detections, and yields better overall accuracy. Although we had an unbalanced dataset and few COVID-19 cases, using the proposed technique we improved COVID-19 detection along with the detection of the other classes. The reason the precision of the COVID-19 class is low is that, unlike some other studies on detecting COVID-19 from X-ray images, we tested our neural networks on a massive number of images; our test set was much larger than our training set. As explained above, because we had only 31 COVID-19 cases and 11271 cases from the other two classes, the false positives of the COVID-19 class can outnumber the true positives. For example, in the first fold, the concatenated network detected 26 of the 31 COVID-19 cases correctly and, out of the 11271 other cases, mistakenly identified only 68 as COVID-19. If we had as many COVID-19 samples as samples of the other classes, the precision would be high; but with few COVID-19 cases and many other cases in the validation set, the precision is low. In another study, the results were presented for both 2-class and 3-class settings, and due to the imbalance in the dataset, several of the reported results are not meaningful [39]. We have presented meaningful, more practical results for each class and for all the classes combined. We could have tested our network on a few cases, as some recent studies have done, but we wanted to show the real performance of our network given few COVID-19 cases. As mentioned, mistakenly flagging 68 of 11271 cases as COVID-19 is not a large number, but it is not ideal either, and we hope that as more data from patients infected with COVID-19 becomes available, the detection accuracy will rise much further.
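The effect of class balance on precision can be made concrete. Holding the fold-1 concatenated network's per-image rates fixed (26 of 31 COVID-19 cases detected, 68 of 11271 others falsely flagged), precision depends only on how many positives and negatives the test set contains. The 50/50 split below is a hypothetical illustration, not an experiment from the paper:

```python
# Expected precision as a function of test-set class balance, with the
# true-positive and false-positive rates held fixed at the fold-1 values.
def precision(n_pos, n_neg, tpr, fpr):
    tp = tpr * n_pos      # expected true positives
    fp = fpr * n_neg      # expected false positives
    return tp / (tp + fp)

tpr = 26 / 31             # COVID-19 sensitivity, fold 1
fpr = 68 / 11271          # COVID-19 false-positive rate, fold 1

actual = precision(31, 11271, tpr, fpr)     # the paper's validation split
balanced = precision(5651, 5651, tpr, fpr)  # hypothetical 50/50 split
print(round(actual * 100, 2), round(balanced * 100, 2))
# -> 27.66 99.29
```

The same error rates that give roughly 28% precision on the paper's 31-vs-11271 split would give over 99% precision on a balanced split, which is exactly the point made above.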

Conclusion

In this paper, we presented a concatenated neural network based on the Xception and ResNet50V2 networks for classifying chest X-ray images into three categories: normal, pneumonia, and COVID-19. We used two open-source datasets that contained 180 and 6054 images from patients infected with COVID-19 and pneumonia, respectively, and 8851 images from normal people. As we had few images of the COVID-19 class, we proposed a method for training the neural network on an unbalanced dataset. We separated the training set into 8 successive phases, each containing 633 images (149 COVID-19, 234 pneumonia, 250 normal). We selected almost equal numbers of images from each class in each phase so that our network would learn the characteristics of the COVID-19 class, not only the features of the other two classes. In each phase, the images from the normal and pneumonia classes were different, so that the network could better distinguish COVID-19 from the other classes. Our training set included 3783 images, and the rest of the images were allocated to evaluating the network. We tested our model on a large number of images so that the accuracy actually achieved would be clear. We achieved an average accuracy of 99.50% and a sensitivity of 80.53% for the COVID-19 class, and an overall accuracy of 91.4%, across five folds. We hope that our trained network, which is publicly available, will be helpful for medical diagnosis. We also hope that larger COVID-19 datasets will become available in the future and that, by using them, the accuracy of our proposed network will increase further.

Code availability

We have shared the trained networks and all the code used in this paper in this GitHub repository (https://github.com/mr7495/covid19). We hope our work will be useful for future research.

Author agreement statement

We declare that this manuscript is original, has not been published before and is not currently being considered for publication elsewhere. We confirm that the manuscript has been read and approved by all named authors and that there are no other persons who satisfied the criteria for authorship but are not listed. We further confirm that the order of authors listed in the manuscript has been approved by all of us. We understand that the Corresponding Author is the sole contact for the Editorial process. He is responsible for communicating with the other authors about progress, submissions of revisions and final approval of proofs.

Declaration of competing interest

The authors declare no competing interest.
References (10 in total)

1. [Review] A survey on deep learning in medical image analysis.

Authors:  Geert Litjens; Thijs Kooi; Babak Ehteshami Bejnordi; Arnaud Arindra Adiyoso Setio; Francesco Ciompi; Mohsen Ghafoorian; Jeroen A W M van der Laak; Bram van Ginneken; Clara I Sánchez
Journal:  Med Image Anal       Date:  2017-07-26       Impact factor: 8.545

2.  Celiac disease diagnosis from videocapsule endoscopy images with residual learning and deep feature extraction.

Authors:  Xinle Wang; Haiyang Qian; Edward J Ciaccio; Suzanne K Lewis; Govind Bhagat; Peter H Green; Shenghao Xu; Liang Huang; Rongke Gao; Yu Liu
Journal:  Comput Methods Programs Biomed       Date:  2019-11-20       Impact factor: 5.428

3.  Comprehensive electrocardiographic diagnosis based on deep learning.

Authors:  Oh Shu Lih; V Jahmunah; Tan Ru San; Edward J Ciaccio; Toshitaka Yamakawa; Masayuki Tanabe; Makiko Kobayashi; Oliver Faust; U Rajendra Acharya
Journal:  Artif Intell Med       Date:  2020-01-20       Impact factor: 5.326

4.  Deep learning analysis of the myocardium in coronary CT angiography for identification of patients with functionally significant coronary artery stenosis.

Authors:  Majd Zreik; Nikolas Lessmann; Robbert W van Hamersvelt; Jelmer M Wolterink; Michiel Voskuil; Max A Viergever; Tim Leiner; Ivana Išgum
Journal:  Med Image Anal       Date:  2017-11-26       Impact factor: 8.545

5.  Emerging 2019 Novel Coronavirus (2019-nCoV) Pneumonia.

Authors:  Fengxiang Song; Nannan Shi; Fei Shan; Zhiyong Zhang; Jie Shen; Hongzhou Lu; Yun Ling; Yebin Jiang; Yuxin Shi
Journal:  Radiology       Date:  2020-02-06       Impact factor: 11.105

6.  Review of the Clinical Characteristics of Coronavirus Disease 2019 (COVID-19).

Authors:  Fang Jiang; Liehua Deng; Liangqing Zhang; Yin Cai; Chi Wai Cheung; Zhengyuan Xia
Journal:  J Gen Intern Med       Date:  2020-03-04       Impact factor: 5.128

7.  Development of an Assessment Method for Investigating the Impact of Climate and Urban Parameters in Confirmed Cases of COVID-19: A New Challenge in Sustainable Development.

Authors:  Behrouz Pirouz; Sina Shaffiee Haghshenas; Behzad Pirouz; Sami Shaffiee Haghshenas; Patrizia Piro
Journal:  Int J Environ Res Public Health       Date:  2020-04-18       Impact factor: 3.390

8.  Computer-Aided Diagnosis with Deep Learning Architecture: Applications to Breast Lesions in US Images and Pulmonary Nodules in CT Scans.

Authors:  Jie-Zhi Cheng; Dong Ni; Yi-Hong Chou; Jing Qin; Chui-Mei Tiu; Yeun-Chung Chang; Chiun-Sheng Huang; Dinggang Shen; Chung-Ming Chen
Journal:  Sci Rep       Date:  2016-04-15       Impact factor: 4.379

9.  Using Artificial Intelligence to Detect COVID-19 and Community-acquired Pneumonia Based on Pulmonary CT: Evaluation of the Diagnostic Accuracy.

Authors:  Lin Li; Lixin Qin; Zeguo Xu; Youbing Yin; Xin Wang; Bin Kong; Junjie Bai; Yi Lu; Zhenghan Fang; Qi Song; Kunlin Cao; Daliang Liu; Guisheng Wang; Qizhong Xu; Xisheng Fang; Shiqin Zhang; Juan Xia; Jun Xia
Journal:  Radiology       Date:  2020-03-19       Impact factor: 11.105

10.  Clinical features of severe pediatric patients with coronavirus disease 2019 in Wuhan: a single center's observational study.

Authors:  Dan Sun; Hui Li; Xiao-Xia Lu; Han Xiao; Jie Ren; Fu-Rong Zhang; Zhi-Sheng Liu
Journal:  World J Pediatr       Date:  2020-03-19       Impact factor: 2.764

