
DC-GAN-based synthetic X-ray images augmentation for increasing the performance of EfficientNet for COVID-19 detection.

Pir Masoom Shah1,2, Hamid Ullah3, Rahim Ullah4, Dilawar Shah2, Yulin Wang1, Saif Ul Islam5, Abdullah Gani6, Joel J P C Rodrigues7,8.   

Abstract

Currently, many deep learning models are used to classify COVID-19 and normal cases from chest X-rays. However, the X-ray data available for COVID-19 are too limited to train a robust deep-learning model. Researchers have tackled this issue with data augmentation, increasing the number of samples through flipping, translation, and rotation. However, this strategy compromises the model's ability to learn high-dimensional features for the given problem, so the risk of overfitting is high. In this paper, we address the issue with a deep convolutional generative adversarial network (DC-GAN), which generates synthetic images for all the classes (Normal, Pneumonia, and COVID-19). To validate that the generated images are accurate, we applied k-means clustering with three clusters (Normal, Pneumonia, and COVID-19) and selected for training only the X-ray images placed in the correct clusters. In this way, we formed a synthetic dataset with three classes. The resulting dataset was then fed to EfficientNetB4 for training. The experiments achieved a promising result of 95% in terms of area under the curve (AUC). To confirm that our network has learned discriminative features associated with the lungs in the X-rays, we used the Grad-CAM technique to visualize the underlying patterns that lead the network to its final decision.
© 2021 John Wiley & Sons Ltd.


Keywords:  COVID‐19; X‐rays; convolutional neural networks; deep‐convolutional generative adversarial networks; synthetic images

Year:  2021        PMID: 34898799      PMCID: PMC8646497          DOI: 10.1111/exsy.12823

Source DB:  PubMed          Journal:  Expert Syst        ISSN: 0266-4720            Impact factor:   2.812


INTRODUCTION

Coronavirus disease 2019 (COVID‐19) was first identified in Wuhan, China, in November 2019. A month later, it was reported that the virus could cause symptoms such as fever, cough, and lung infection. Owing to its fast‐spreading nature, patients can now be found worldwide (WHO, 2020). On 30 January 2020, COVID‐19 was declared a public health emergency by the World Health Organization (WHO) because of its wide spread, its rapid person‐to‐person transmission, and the fact that most people had no immunity to it. COVID‐19 is communicable among humans and various animals such as cattle, bats, cats, and camels. In the first few cases, the virus is believed to have passed from animals to humans at the seafood and live‐animal market of Wuhan. On 11 March 2020, when confirmed and registered cases numbered about 118,000 and deaths exceeded 4000, the WHO declared the COVID‐19 outbreak a pandemic (e Conhecimento 2020; Shah et al., 2021). The pandemic has spread throughout the world at a speed never before experienced with an infectious disease. The WHO recommends social distancing as an effective measure to control the spread of the virus. Accurate and practical screening and testing are essential so that infected persons can receive proper treatment and be isolated to halt further transmission. The state‐of‐the‐art methods for detecting COVID‐19 and measuring the antibodies it induces are serology and reverse transcription‐polymerase chain reaction (RT‐PCR) (W. Wang et al. 2020); another method is the nucleic acid test (NAT) (D. Wang et al. 2020). Detecting COVID‐19 with testing kits is challenging because of their limited availability. Moreover, these tests take from a few hours to a few days to produce results, which makes them slow and tedious. The results are also prone to errors, and the false‐positive rate causes dissatisfaction.
A fast, reliable, accurate, and consistent testing technique is therefore urgently required. Researchers in artificial intelligence (AI) have turned their focus to identifying the novel coronavirus from medical images such as computed tomography (CT) scans and X‐rays (Ng et al., 2020; Xu et al., 2020). The use of chest radiography for initial screening of COVID‐19 in epidemic areas was recently proposed (Ai et al., 2020). Hence, screening by radiographic images, with its better sensitivity in some situations, can substitute for the NAT and PCR methods. One study (Yasin & Gouda, 2020) claims that chest X‐rays meet the laboratory criteria for diagnosing COVID‐19. The rate of abnormal chest X‐rays in hospitals increased around the globe during the peak of COVID‐19; researchers therefore concluded (Stephens, 2020) that chest X‐rays can be used for COVID‐19 diagnosis. X‐ray‐based detection has several advantages over PCR testing, such as availability and speed. However, it requires enough radiologists to serve the growing population of COVID‐19 patients, which may not be feasible at present. There is therefore a need for a computer‐aided diagnosis (CAD) system that can automatically interpret X‐rays or assist radiologists in decision‐making. Limited annotated data and privacy are major concerns in medical research. On the other hand, deep‐learning models have revolutionized many domains by achieving human‐level accuracy on image classification tasks. In the context of COVID‐19, however, only a limited amount of data is available. Various deep learning models have already been applied to COVID‐19 classification under this limitation, but an effective model can only be obtained when it is trained on a rich dataset.
To tackle the issue of limited data in the context of COVID‐19, Waheed et al. (Waheed et al., 2020) proposed a model they called CovidGAN, which generates synthetic X‐ray images for COVID‐19. However, the generated X‐ray images were not validated. In this paper, we therefore propose a framework that generates synthetic images for all three classes (COVID‐19, Normal, and Pneumonia) and validates them with the k‐means clustering algorithm. Our scheme is based on the DC‐GAN, k‐means clustering, and EfficientNetB4 algorithms: DC‐GAN generates synthetic images for all the classes, k‐means clustering is then applied to the combined classes to validate and annotate the images, and finally EfficientNetB4 is applied for classification. To ensure that the classification results are sound, we used Grad‐CAM to visualize the network's decision on each image. Radiologists require an assistant tool to manage the growing population of COVID‐19 patients, and developing such a tool is only possible with a sufficiently large dataset for training a deep learning model. In this regard, this study makes the following contributions:
• This paper extends the COVID‐19 dataset using DC‐GAN.
• The generated dataset is validated and annotated.
• This work facilitates training a robust model that can serve as a radiologist's tool.
• Finally, the learned features of the CNN are visualized and explored.
The rest of the paper is organized as follows: the next section presents related work, and Section 3 details the methodology. Section 4 presents the obtained results and discussion. Finally, Section 5 concludes the paper.

RELATED WORK

Brunese et al. (Brunese et al., 2020) worked on detecting COVID‐19 pneumonia from X‐ray images using deep learning. According to the authors, a dataset of about 6523 X‐ray images was used to find positive cases of the disease. The process is divided into three stages: in the first, pneumonia is detected in the X‐ray images; in the second, common pneumonia is differentiated from COVID‐19; and in the third, the infected area in the X‐ray image is localized. The experiments used a VGG‐16‐based deep learning algorithm exploiting transfer learning. In response to the inefficiency, inaccuracy, and limited availability of PCR machines for detecting COVID‐19, Chowdhury et al. (Chowdhury et al., 2020) experimented on a relatively big chest X‐ray dataset to identify COVID‐19 cases. The process is two‐fold: the first model distinguishes COVID‐19 from normal X‐ray images, while the second was trained to differentiate viral pneumonia from COVID‐19 pneumonia. The authors focused on combining many datasets used in different papers on Kaggle to build a reasonably big dataset for deep learning. Different CNN variants were used in experiments, with and without augmentation, using transfer learning, and about 99% accuracy was achieved. Early detection of COVID‐19 is essential because the timely isolation of patients may stop the virus from spreading further; the methods used in practice are slow and costly, so automatic detection is needed. Detection of COVID‐19 from X‐ray images was performed by Apostolopoulos et al. (Apostolopoulos & Mpesiana, 2020), who utilized two datasets of 1427 and 1442 images collected from publicly available repositories. The accuracy, sensitivity, and specificity of their system, using deep learning with transfer learning, are 96%, 98%, and 96%, respectively.
According to the authors, detecting COVID‐19 via X‐ray is a useful addition to the traditional testing methods, as it can help maximize the speed, efficiency, and accuracy of conventional tests. Ozturk et al. (Ozturk et al., 2020) elaborated on the importance of early recovery of COVID‐19‐positive patients and then discussed methods for detecting the disease. The detection of COVID‐19 through CT and X‐ray is discussed in detail and found useful for timely detection. According to this paper, detection is first performed in binary form (COVID versus non‐COVID); the second approach is multiclass classification (COVID versus non‐COVID versus pneumonia). They used a dataset of 125 X‐ray images and obtained accuracies of 98% for binary and 87% for multiclass detection of COVID‐19. RT‐PCR is an expensive and slow method for COVID‐19 detection; fortunately, however, the X‐rays of COVID‐19‐infected patients show certain patterns through which the disease can also be detected. Such patterns are difficult to spot with the naked eye, but deep learning algorithms can diagnose them accurately. The EfficientNet family of deep learning models was used with a large dataset of 13,569 X‐ray images of three classes (healthy, non‐COVID‐19 pneumonia, and COVID‐19 patients), and the system was evaluated on 231 images of the same classes. The overall accuracy of the system is 93% for COVID‐19, while the sensitivity is 96%. According to the authors (Luz et al., 2020), a big dataset is still needed for evaluation before practical deployment. The CT and clinical features of pregnant women and children in China were examined by Liu et al. (Liu et al., 2020).
The authors of this paper examined the CT and clinical features of COVID‐19 in children and pregnant women, which had not been well investigated in the literature. A dataset of 59 patients with clinical and CT features of COVID‐19, collected from 27 January to 14 February 2020, was retrospectively reviewed. It includes 14 laboratory‐confirmed non‐pregnant adults, 16 laboratory‐confirmed and 25 clinically diagnosed pregnant women, and four laboratory‐confirmed children; the CT and clinical features were analysed and compared. A roadmap is then made towards diagnosing COVID‐19 using deep learning. Different datasets were used, with 130 COVID‐19‐positive and 130 normal images from GitHub, Kaggle, and Open‐I repositories. Three CNN architectures, VGG16, ResNet50, and InceptionV3, were used in the experiments, and overall 100% accuracy was achieved for different factors. According to Salman et al. (Salman et al., 2020), this will help radiologists relieve pressure at peak times, improve timely diagnosis, and support timely isolation and treatment, thus helping to control the COVID‐19 pandemic. Detection of COVID‐19 based on deep features is presented by Sethy et al. (Sethy et al., 2020). This paper takes a different approach: features are extracted by a deep learning algorithm (a CNN) and fed to an SVM for classification. They used three datasets, from Kaggle, GitHub, and Open‐i. The Kaggle dataset has 25 COVID‐19‐positive and 25 COVID‐19‐negative cases without SARS, MERS, and ARDS; the GitHub dataset has 133 COVID‐19‐positive images with SARS, MERS, and ARDS; and 133 COVID‐19‐negative X‐ray images were collected from Open‐i. These datasets were analysed separately with different CNN and SVM models, such as AlexNet, VGG16, VGG19, GoogleNet, ResNet18, ResNet50, ResNet101, InceptionV3, InceptionResNetV2, DenseNet201, and XceptionNet; the ResNet models used for classification achieved about 92% accuracy. Maghdid et al.
in (Maghdid et al., 2020) published work on diagnosing COVID‐19 pneumonia from X‐ray images and CT scans using deep learning and transfer learning. They discuss the importance of AI in COVID‐19 detection and note that it needs a pre‐processed dataset; hence, the main focus of the paper is developing a dataset for AI algorithms. A dataset is first generated from multiple sources; afterwards, a simple CNN is used for experiments on the prepared dataset, and a modified, pre‐trained CNN is used to accelerate training and evaluate accuracy. The usefulness of chest X‐ray images for diagnosing COVID‐19 was explored by Hall et al. (Hall et al., 2020). According to the authors, RT‐PCR testing for COVID‐19 is not available as widely as needed, its false‐negative rate is up to 30%, and it takes time. Because a highly accurate result is required quickly so that the spread of the virus can be stopped in time, another testing approach is needed. Chest X‐rays show patterns that can help diagnose the disease in time, and X‐ray machines are widely available. In their experiments, 135 chest X‐rays of COVID‐19‐positive patients and 320 chest X‐rays of viral and bacterial pneumonia were used; ResNet50 was run with 10‐fold cross‐validation and achieved 89.2% accuracy.

PROPOSED METHODOLOGY

Radiologists require an assistant tool to manage the growing population of COVID‐19 patients, and developing such a tool is only possible with a sufficiently large dataset for training a deep learning model. The framework of this work consists of several phases. In the first phase, DC‐GAN is used to generate synthetic images; DC‐GANs were trained separately for all classes (COVID‐19, Pneumonia, and Normal). In the next phase, the generated images for all classes were merged with the original data to validate their correctness: three clusters were formed with the k‐means algorithm, with k set to 3 for the three classes. In this process, 92% of the images were correctly placed in their respective clusters and the remaining 8% were discarded. In the last phase, EfficientNetB4, with some enhancements in its last layers, was used for classification. Because the class labels of the synthetic data are assigned on the basis of assumptions, validation of the classifier is needed: attention maps were used to visualize the decision confidence on each image, and strong attention on the lung region indicates that our network has learned the relevant features. This methodology is illustrated in Figure 1.
FIGURE 1

The proposed framework


Initial dataset

As discussed earlier, minimal data are available in the context of COVID‐19. We therefore drew on two different sources: the Joseph Paul Cohen dataset (Cohen et al., 2020) and a Kaggle repository (Mooney, 2018). To avoid class imbalance, we acquired an equal number of samples for each class. Since the COVID‐19 class contained only 141 samples, the other two classes were also limited to 141 samples each. The combined dataset thus consists of three classes (COVID‐19, Pneumonia, and Normal) with 141 × 3 = 423 instances. We divided the dataset into training and test subsets with a 90:10 ratio; the test set was kept completely separate from the training of both the DC‐GAN and the EfficientNet. Table 1 gives the complete breakdown: the training set includes 126 COVID‐19, 127 Pneumonia, and 126 Normal instances, while the testing set has 15, 14, and 15 instances, respectively.
TABLE 1

Dataset details

            Training set   Testing set   Total
COVID-19    126            15            141
Pneumonia   127            14            141
Normal      126            15            141
Total       379            44            423
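The per-class 90:10 split above is easy to reproduce. The sketch below (plain Python, not the authors' code) floors the 90% training share, which matches the COVID-19 and Normal rows of Table 1; the Pneumonia row apparently rounds the other way (127/14).

```python
# Reproducing the per-class 90:10 split reported in Table 1.
total_per_class = 141
train = int(total_per_class * 0.9)   # floor(126.9) = 126
test = total_per_class - train       # 141 - 126 = 15
print(train, test)
```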

Data generations with GAN

In GANs, two networks are trained simultaneously: the first focuses on generating images while the second discriminates real images from generated ones (Yi et al., 2019). GANs are gaining importance in academia and industry because of their effective image generation and their ability to counteract domain shift. They have performed well in many image‐generation tasks such as super‐resolution (Ledig et al., 2017), text‐to‐image synthesis (Yang et al., 2017), and image‐to‐image translation (Zhu et al., 2017). By regulation, a patient's consent is mandatory when diagnostic images are published in the public domain (Clinical Practice Committee, 2000). GANs are widely used to generate synthetic images, which sidesteps the privacy issue while providing sufficient images for analysis. The lack of experts available to annotate medical images is another challenge for supervised learning. Although several healthcare agencies have built large publicly available datasets, such as The Cancer Imaging Archive, Biobank, the Radiological Society of North America, and the National Biomedical Imaging Archive, the issue remains a big challenge. Training samples can typically be enlarged by rotation, flipping, scaling, and elastic deformation (Clinical Practice Committee, 2000), but these do not provide the variation found in true samples. GANs, in contrast, offer synthetic data samples with attributes similar to the actual data, and they have been used to augment image datasets in several papers with good performance. A foundational variant, the deep convolutional GAN (DC‐GAN), was proposed by Radford et al. (Radford et al., 2015), in which both the generator and the discriminator are deep CNNs. DC‐GAN is a stable‐to‐train modification of the GAN proposed by Goodfellow et al. (Goodfellow et al., 2014) and is the foundation of many recent GANs (Odena et al., 2017; Yeh et al., 2017; Salimans et al., 2016). The model comprises two neural networks trained at the same time.
Figure 2 illustrates a typical GAN with both networks. The first is the discriminator D, which distinguishes between real and fake images: it takes an image x as input and outputs D(x), the probability that x is real. The second is the generator G, which synthesizes images that D declares real with high probability. G takes as input a noise vector z drawn from a simple (uniform) distribution p_z and maps it to the image space; its goal is to make the distribution of G(z) match the data distribution p_data. The networks are trained to optimize the minimax loss min_G max_D V(D, G) = E_{x ~ p_data}[log D(x)] + E_{z ~ p_z}[log(1 − D(G(z)))]. The discriminator is trained to maximize D(x) for samples x ~ p_data and to minimize D(G(z)) for generated samples, while the generator creates samples G(z) intended to make D consider them real. Through this adversarial training, the generator improves its ability to create realistic images while the discriminator improves its ability to separate real images from synthesized ones.
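As a concrete illustration of this objective, the toy NumPy sketch below (an assumption on our part, not the authors' implementation) evaluates the discriminator loss and the common non-saturating generator loss for hand-picked discriminator outputs:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Negated GAN value function for D: D is trained to maximize
    log D(x) on real samples and log(1 - D(G(z))) on fakes."""
    return -(np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake)))

def generator_loss(d_fake):
    """Non-saturating generator loss: G maximizes log D(G(z))."""
    return -np.mean(np.log(d_fake))

# Toy discriminator outputs (probabilities that a sample is real).
d_real = np.array([0.9, 0.8, 0.95])   # confident on real X-rays
d_fake = np.array([0.1, 0.2, 0.05])   # confident on synthetic X-rays
print(discriminator_loss(d_real, d_fake))  # low: D separates well
print(generator_loss(d_fake))              # high: G must improve
```

When D classifies well, its loss is small while the generator's loss is large, which is exactly the gradient signal that pushes G towards more realistic images.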
FIGURE 2

A generative adversarial network illustration


Generator

The generator's input is a vector of 100 random numbers drawn from a uniform distribution, and its output is a 64 × 64 × 1 image, as shown in Figure 3. The network (Radford et al., 2015) has a fully connected layer reshaped to 4 × 4 × 1024, followed by four fractionally‐strided convolutional layers with 5 × 5 kernels. A fractionally‐strided convolution (deconvolution) expands the input by inserting zero pixels between the existing ones, enlarging the image. Every layer except the output layer uses batch normalization, which stabilizes GAN training and prevents the generator from collapsing to a single point (Ioffe & Szegedy, 2015). ReLU is the activation function in all layers except the last, which uses tanh.
FIGURE 3

The generator module of GAN

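The zero-insertion step that a fractionally-strided convolution relies on can be sketched in a few lines of NumPy (an illustrative toy, not the paper's code; the 5 × 5 convolution that follows this upsampling is omitted):

```python
import numpy as np

def zero_insert_upsample(x, stride=2):
    """Illustrate the zero-insertion step of a fractionally-strided
    convolution: (stride - 1) zeros are placed between neighbouring
    pixels, enlarging the map before the kernel is applied."""
    h, w = x.shape
    out = np.zeros((h * stride - (stride - 1), w * stride - (stride - 1)))
    out[::stride, ::stride] = x   # original pixels on a sparse grid
    return out

x = np.array([[1., 2.],
              [3., 4.]])
print(zero_insert_upsample(x))   # 2x2 input becomes a 3x3 map
```

Stacking four such upsampling convolutions is what takes the 4 × 4 seed up to the 64 × 64 output image.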

Discriminator

The discriminator is a typical CNN that accepts a 64 × 64 × 1 input (an X‐ray) and makes a binary decision: is the X‐ray real or fake? The network has four convolutional layers with 5 × 5 kernels followed by a fully connected layer. Spatial dimensionality is reduced by strided convolutions instead of pooling layers. Every layer except the input and output layers uses batch normalization. Each layer uses the leaky ReLU activation f(x) = max(x, leak × x), while the last layer uses a sigmoid to output a probability in (0, 1) that the sample is real.
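The two activations named above are simple to state directly; the snippet below (a sketch, independent of any framework) shows how leaky ReLU keeps a small gradient on negative inputs and how the sigmoid squashes the final score into a probability:

```python
import numpy as np

def leaky_relu(x, leak=0.2):
    """Discriminator activation f(x) = max(x, leak * x)."""
    return np.maximum(x, leak * x)

def sigmoid(x):
    """Final-layer squashing to a real/fake probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-2.0, 0.0, 3.0])
print(leaky_relu(x))             # negative inputs keep a small slope
print(sigmoid(np.array([0.0])))  # 0.5: maximally uncertain sample
```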

Training DC‐GAN

Our real‐image dataset was collected from different sources, so the image resolutions varied. To reduce GPU processing, we re‐sampled the images to 64 × 64 pixels and then trained the network for each class separately, keeping the parameter settings the same across all training runs. Each experiment ran for 500 epochs; for every class, the DC‐GAN started producing realistic X‐ray images after roughly the 50th epoch. Figure 4 shows samples of the synthetically generated X‐ray images.
FIGURE 4

Sample images of synthetically generated X‐rays


Validating synthetic images by K‐mean clustering

To validate the images generated by the DC‐GAN, we used the k‐means clustering algorithm. Since we were dealing with three classes, we set k = 3. The algorithm was trained on a 70:30 mix of synthetic and original data and, conversely, evaluated on a 30:70 mix of synthetic and original data. To evaluate the clustering performance, we used accuracy, homogeneity, and inertia. We also experimented with smaller values of k, but this compromised accuracy, homogeneity, and inertia. Figure 5 shows the initial experiments with different settings; the best results are obtained when k is set to 3.
FIGURE 5

Experimental results of clustering with K means

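A minimal sketch of the vetting idea, under stated assumptions: the toy NumPy k-means below stands in for the paper's clustering step (the real pipeline clusters image features, and the exact feature representation is not specified in this section). Three well-separated blobs play the role of the three classes; a synthetic image would be kept only if its cluster agrees with the class its DC-GAN was trained on.

```python
import numpy as np

def kmeans(features, k=3, iters=20):
    """Plain Lloyd's k-means; k = 3 mirrors the Normal, Pneumonia,
    and COVID-19 clusters used to vet the synthetic X-rays."""
    # deterministic init: evenly spaced samples (one per blob here)
    centroids = features[:: max(1, len(features) // k)][:k].copy()
    labels = np.zeros(len(features), dtype=int)
    for _ in range(iters):
        # assign each sample to its nearest centroid
        dists = np.linalg.norm(features[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):   # guard against empty clusters
                centroids[c] = features[labels == c].mean(axis=0)
    return labels

# Toy stand-ins for image feature vectors: three separated blobs,
# one per class, so cluster membership is unambiguous.
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(m, 0.3, size=(30, 8)) for m in (0.0, 3.0, 6.0)])
labels = kmeans(feats)
print(len(set(labels.tolist())))   # three clusters recovered
```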

Classification phase

Generally, any CNN variant can be fitted into our framework, but we used EfficientNetB4. EfficientNet has the advantage of a highly effective compound scaling method, which makes it possible to scale a baseline convolutional neural network up to some target resource constraint without compromising the model's effectiveness in transfer learning. The selection of EfficientNetB4 was based on its performance on the COVID‐19 dataset in our previous study (under review). The pre‐trained EfficientNetB4 is used as the feature extractor, and for classification we appended a fully connected head consisting of three dense layers. To reduce the dimensionality of the features learned by EfficientNetB4, we placed a max‐pooling layer between EfficientNetB4 and the fully connected head. No local library or software is required for this study; to reproduce the results, readers can use a GPU‐enabled Kaggle notebook, as Kaggle provides free access to NVIDIA K80 GPUs.
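The role of that pooling layer can be illustrated in isolation. The NumPy sketch below is only an assumption-level stand-in: the random array mimics an EfficientNetB4 feature map (B4's top convolutional block emits 1792 channels), and a single hypothetical dense layer plays the part of the paper's three-dense-layer head.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the EfficientNetB4 feature extractor's output on one
# X-ray: a spatial feature map with 1792 channels.
feature_map = rng.normal(size=(7, 7, 1792))

# Global max pooling collapses each channel to one scalar, shrinking
# 7 * 7 * 1792 activations to a 1792-dim vector before the dense head.
pooled = feature_map.max(axis=(0, 1))

# Hypothetical final dense layer with a softmax over the three classes
# (the paper's head has three dense layers; one suffices to illustrate).
w = rng.normal(size=(1792, 3)) * 0.01
logits = pooled @ w
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(pooled.shape, probs.shape)   # (1792,) (3,)
```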

PERFORMANCE EVALUATION

In this section, we discuss our experiments and results. We carried out several experiments with different parameter settings; the following subsections explain each experiment and its effect on the results. It is worth noting that the test set remained the same in all experiments, separate from the training set. Moreover, all images in the test set are real images.

Experiment 1

In this experiment, we used EfficientNetB4, training the model from scratch on the available dataset of real images. The training set comprised 296 instances, while the test set had 127 instances. Training continued for 100 epochs; the number of epochs remained the same (100) across all experiments. In this experiment, we recorded an AUC of 89%.

Experiment 2

In this experiment, we used the same number of instances for training and testing; however, we fine‐tuned our EfficientNetB4 model using transfer learning. With this technique, the model converged faster than in the previous experiment. The AUC recorded on the test set was 92%.

Experiment 3

The number of training samples was increased by adding synthetic images, keeping a 1:1 ratio of real to synthetic images. We also followed the transfer‐learning strategy here. By training the model on synthetic images as well, the AUC on the test set increased by 3% over the second experiment.

Experiment 4

The fourth experiment used the same settings as the third, but we enlarged the training set to a 1:2 ratio of real to synthetic images. This improved the results to an AUC of 96%.

Discussion

Table 2 compares the results of all the experiments. The first row represents the first experiment, in which the CNN was trained from scratch on real images. Row two represents the second experiment, where the model was again trained on real images but with transfer learning, which affected the result positively. Row 3 details experiment 3, which used the same network parameters but added synthetic images to the training set at a 1:1 ratio of synthetic to real images. In the same way, the fourth experiment (row 4) used synthetic and real images at a 2:1 ratio. All the experiments demonstrate that the AUC increases when transfer learning and synthetic images are added to training, confirming that the synthetic images carry significant insights related to all the classes.
TABLE 2

Experiments details

Experiment no.   Training dataset      Training            No. of instances   AUC for all the classes (on test set)
1                Real                  From scratch        423                89
2                Real                  Transfer learning   423                92
3                Real + synthetic      Transfer learning   846                95
4                Real + 2× synthetic   Transfer learning   1269               96

Class activation maps

Developing a more robust understanding of deep learning models is an essential field of study. Deep convolutional neural networks are often called black‐box models because of the lack of information about their internal workings. To create explainable deep learning models, several researchers have recently proposed class activation maps (CAMs), which display the evidence behind deep‐learning predictions to assist human experts. Gradient‐based CAMs (i.e., Grad‐CAMs) highlight the regions of the input image most relevant to the final model prediction for each class. The availability of such information alongside the model's predictions plays an important role in building trust in deep learning‐based algorithms; Grad‐CAM also enables a human expert (a doctor) to verify what the network relies on. To provide a comparative understanding of the model's predictions and to validate that our network has learned features genuinely associated with COVID‐19, we visualized the attention maps for all the classes. Figures 6, 7, and 8 depict the input image, model estimation, and corresponding Grad‐CAMs of the proposed model for normal, COVID‐19, and pneumonia patients, respectively. In these figures, the left column shows the input images and the second column the Grad‐CAM of the respective image.
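The core Grad-CAM computation is compact enough to show directly. The NumPy sketch below is an illustration, not the authors' code: it assumes the last convolutional layer's activations and the gradients of the class score with respect to them are already available (in practice a framework's autodiff supplies the gradients), then forms the weighted, ReLU-clipped heat map.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heat map from the last conv layer.

    feature_maps: (H, W, K) activations A_k
    gradients:    (H, W, K) d(class score)/d(A_k)
    """
    # channel importance: global-average-pooled gradients
    weights = gradients.mean(axis=(0, 1))
    cam = np.tensordot(feature_maps, weights, axes=([2], [0]))
    cam = np.maximum(cam, 0.0)   # ReLU keeps positive evidence only
    if cam.max() > 0:
        cam /= cam.max()         # normalize to [0, 1] for overlay
    return cam

# Toy example: one channel fires on a "lung" region and its gradient
# is positive, so the map should highlight exactly that region.
A = np.zeros((4, 4, 2)); A[1:3, 1:3, 0] = 1.0
G = np.zeros((4, 4, 2)); G[..., 0] = 1.0
cam = grad_cam(A, G)
print(cam[1, 1], cam[0, 0])   # active region vs. background
```

The resulting map is upsampled to the input resolution and overlaid on the X-ray, which is what the figures above display.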
FIGURE 6

Normal: True‐negative samples from the test set along with Grad‐CAM

FIGURE 7

COVID‐19: True‐positive samples from the test set along with Grad‐CAM

FIGURE 8

Pneumonia: True‐positive samples from the test set along with Grad‐CAM


LIMITATIONS AND FUTURE WORK

This study still has several limitations. The GAN architecture and training could be tuned with better techniques, and we selected a small dataset due to time constraints. Likewise, there are obstacles to obtaining and adding more labelled data to the GAN's learning process. The quality of the synthetic X‐ray images generated with the DC‐GAN can be improved; in the future, we aim to apply a progressively growing GAN to improve the generated images.

CONCLUSION

Preliminary findings demonstrate that synthetically generated X‐ray images produced with a GAN can contribute to the accurate detection of COVID‐19. We designed a framework that learns high‐dimensional features despite the limited available data. These results demonstrate how GANs and deep learning can substantially help in detecting COVID‐19 patients from X‐ray scans, and they provide an example for other researchers interested in applying this method to real‐world problems. Lastly, we believe it can be a beneficial tool for clinical practitioners and radiologists to speed up the testing, detection, and follow‐up of COVID‐19 cases.

CONFLICT OF INTEREST

The authors declare that there is no conflict of interest regarding the publication of research work carried out in this paper and about the order of the authors in the manuscript.
REFERENCES (14 in total)

1.  Detection of SARS-CoV-2 in Different Types of Clinical Specimens.

Authors:  Wenling Wang; Yanli Xu; Ruqin Gao; Roujian Lu; Kai Han; Guizhen Wu; Wenjie Tan
Journal:  JAMA       Date:  2020-05-12       Impact factor: 56.272

2.  Clinical Characteristics of 138 Hospitalized Patients With 2019 Novel Coronavirus-Infected Pneumonia in Wuhan, China.

Authors:  Dawei Wang; Bo Hu; Chang Hu; Fangfang Zhu; Xing Liu; Jing Zhang; Binbin Wang; Hui Xiang; Zhenshun Cheng; Yong Xiong; Yan Zhao; Yirong Li; Xinghuan Wang; Zhiyong Peng
Journal:  JAMA       Date:  2020-03-17       Impact factor: 56.272

3.  CovidGAN: Data Augmentation Using Auxiliary Classifier GAN for Improved Covid-19 Detection.

Authors:  Abdul Waheed; Muskan Goyal; Deepak Gupta; Ashish Khanna; Fadi Al-Turjman; Placido Rogerio Pinheiro
Journal:  IEEE Access       Date:  2020-05-14       Impact factor: 3.367

4.  Correlation of Chest CT and RT-PCR Testing for Coronavirus Disease 2019 (COVID-19) in China: A Report of 1014 Cases.

Authors:  Tao Ai; Zhenlu Yang; Hongyan Hou; Chenao Zhan; Chong Chen; Wenzhi Lv; Qian Tao; Ziyong Sun; Liming Xia
Journal:  Radiology       Date:  2020-02-26       Impact factor: 11.105

5.  Imaging Profile of the COVID-19 Infection: Radiologic Findings and Literature Review.

Authors:  Ming-Yen Ng; Elaine Y P Lee; Jin Yang; Fangfang Yang; Xia Li; Hongxia Wang; Macy Mei-Sze Lui; Christine Shing-Yen Lo; Barry Leung; Pek-Lan Khong; Christopher Kim-Ming Hui; Kwok-Yung Yuen; Michael D Kuo
Journal:  Radiol Cardiothorac Imaging       Date:  2020-02-13

6.  Explainable Deep Learning for Pulmonary Disease and Coronavirus COVID-19 Detection from X-rays.

Authors:  Luca Brunese; Francesco Mercaldo; Alfonso Reginelli; Antonella Santone
Journal:  Comput Methods Programs Biomed       Date:  2020-06-20       Impact factor: 5.428

7.  DC-GAN-based synthetic X-ray images augmentation for increasing the performance of EfficientNet for COVID-19 detection.

Authors:  Pir Masoom Shah; Hamid Ullah; Rahim Ullah; Dilawar Shah; Yulin Wang; Saif Ul Islam; Abdullah Gani; Joel J P C Rodrigues
Journal:  Expert Syst       Date:  2021-10-19       Impact factor: 2.812

8.  Clinical and CT imaging features of the COVID-19 pneumonia: Focus on pregnant women and children.

Authors:  Huanhuan Liu; Fang Liu; Jinning Li; Tingting Zhang; Dengbin Wang; Weishun Lan
Journal:  J Infect       Date:  2020-03-21       Impact factor: 6.072

9.  Covid-19: automatic detection from X-ray images utilizing transfer learning with convolutional neural networks.

Authors:  Ioannis D Apostolopoulos; Tzani A Mpesiana
Journal:  Phys Eng Sci Med       Date:  2020-04-03

10.  Artificial intelligence for the detection of COVID-19 pneumonia on chest CT using multinational datasets.

Authors:  Stephanie A Harmon; Thomas H Sanford; Sheng Xu; Evrim B Turkbey; Holger Roth; Ziyue Xu; Dong Yang; Andriy Myronenko; Victoria Anderson; Amel Amalou; Maxime Blain; Michael Kassin; Dilara Long; Nicole Varble; Stephanie M Walker; Ulas Bagci; Anna Maria Ierardi; Elvira Stellato; Guido Giovanni Plensich; Giuseppe Franceschelli; Cristiano Girlando; Giovanni Irmici; Dominic Labella; Dima Hammoud; Ashkan Malayeri; Elizabeth Jones; Ronald M Summers; Peter L Choyke; Daguang Xu; Mona Flores; Kaku Tamura; Hirofumi Obinata; Hitoshi Mori; Francesca Patella; Maurizio Cariati; Gianpaolo Carrafiello; Peng An; Bradford J Wood; Baris Turkbey
Journal:  Nat Commun       Date:  2020-08-14       Impact factor: 14.919

