
Detection of COVID-19 from chest X-ray images: Boosting the performance with convolutional neural network and transfer learning.

Sohaib Asif1,2, Yi Wenhui1, Kamran Amjad1, Hou Jin3, Yi Tao4, Si Jinhai1.   

Abstract

Coronavirus disease (COVID-19) is a pandemic that has caused thousands of casualties and has had an impact all over the world. Most countries are facing a shortage of COVID-19 test kits in hospitals due to the daily increase in the number of cases. Early detection of COVID-19 can protect people from severe infection. Unfortunately, COVID-19 can be misdiagnosed as pneumonia or another illness, which can lead to patient death. Therefore, in order to limit the spread of COVID-19 among the population, it is necessary to implement an automated early diagnostic system as a rapid alternative. Several researchers have done well in detecting COVID-19; however, many of their models suffer from low accuracy and overfitting, which makes early screening of COVID-19 difficult. Transfer learning is a successful technique for addressing this problem with higher accuracy, and it is well suited to medical imaging, where data availability is limited. In this paper, we studied the feasibility of applying transfer learning with our own classifier to automatically classify COVID-19. We propose a CNN model based on deep transfer learning using six different pre-trained architectures: VGG16, DenseNet201, MobileNetV2, ResNet50, Xception, and EfficientNetB0. A total of 3886 chest X-rays (1200 COVID-19 cases, 1341 healthy, and 1345 viral pneumonia cases) were used to study the effectiveness of the proposed CNN model. A comparative analysis of the proposed CNN models on the three-class chest X-ray dataset was carried out to find the most suitable model. Experimental results show that the proposed CNN model based on VGG16 was able to diagnose COVID-19 patients with 97.84% accuracy, 97.90% precision, 97.89% sensitivity, and an F1-score of 97.89%.
Evaluation of the test data shows that the proposed model produces the highest accuracy among CNNs and seems to be the most suitable choice for COVID-19 classification. We believe that in this pandemic situation, this model will support healthcare professionals in improving patient screening.
© 2022 John Wiley & Sons Ltd.


Keywords:  COVID‐19 detection; VGG16; chest X‐rays; deep CNN; medical image analysis; transfer learning

Year:  2022        PMID: 35945966      PMCID: PMC9353436          DOI: 10.1111/exsy.13099

Source DB:  PubMed          Journal:  Expert Syst        ISSN: 0266-4720            Impact factor:   2.812


INTRODUCTION

The novel coronavirus (COVID‐19) is an acutely fatal disease that originated in Wuhan, China in December 2019 and spread globally. The COVID‐19 outbreak has been a great concern of the health community because no cure has been discovered (Rothe et al., 2020). On 11 March 2020, the World Health Organization (WHO) declared the disease a pandemic (World Health Organization, 2020). The virus is composed of single-stranded RNA, and because it mutates readily, the disease is very hard to cure. COVID‐19 typically begins as a throat infection, after which patients may suddenly have trouble breathing. Other common symptoms are headache, fever, shortness of breath, and cough. These symptoms alone do not confirm COVID‐19, but in some cases the disease progresses to pneumonia, which poses a diagnostic problem for healthcare professionals. To protect healthy people from COVID‐19, infected patients must be isolated, properly examined, and managed with other protective measures (Roosa et al., 2020). COVID‐19 is a contagious disease that can be transmitted from person to person through breath, hand, and mucus contact (Yan et al., 2020). The virus mostly attacks people with weak immune systems, but healthy people can also contract the disease (Lancet, 2020; Razai et al., 2020). Coronaviruses come in different types and are found in animals as well. Despite the introduction of vaccines worldwide, some countries still record new cases and new deaths every day. Most new cases are due to the second and third waves, requiring immediate and long-term solutions for early detection of this disease. It is therefore very important to find a solution for early detection to prevent the further spread of COVID‐19. The polymerase chain reaction (PCR) test is used to diagnose COVID‐19, but the time required for this test is relatively long and it is expensive.
In this type of test, a long swab is inserted into the patient's nostrils or the back of the throat to collect a sample, and the results are obtained within a few days. In some cases, test results are negative even when the patient shows progression on subsequent computed tomography (CT). Due to the limited availability of RT‐PCR in some countries, the use of CT scans and X‐rays is recommended. Using CT scans and X‐rays, coronavirus signs can be detected with high accuracy in the lower parts of the lungs, and in some cases these modalities can serve as a substitute for the RT‐PCR test. However, the number of radiologists in the world is limited, while the number of people infected with this virus is very large, and some patients are re-examined to track the progression of the disease. To overcome the problem of delayed reports and assist radiologists, there is a need to develop a non-invasive system for the early detection of COVID‐19 using state-of-the-art artificial intelligence (AI) tools. Deep learning is a branch of machine learning based on neural networks with three or more layers. These neural networks behave like a human brain and learn from large amounts of data. AI and deep learning are being used in different fields of biomedical research such as medical image analysis (AlZu'bi, Mughaid, et al., 2019), drug discovery, blood donation (AlZu'bi, Aqel, & Mughaid, 2021), and biomedicine. Researchers have presented different types of models to diagnose, segment, and predict diseases, including lung and skin cancer detection, diabetes prediction, heart disease detection, dengue, and Parkinson's disease. AI is used in image classification, detection, and medical image segmentation (AlZu'bi et al., 2020; Al‐Zu'bi et al., 2017; Al‐Zu'bi et al., 2021). Strong GPU support and the size and quality of the dataset are also factors that positively impact network accuracy (AlZu'bi et al., 2018).
Using computer-aided applications, artificial intelligence (Li, Qin, et al., 2020) and deep learning (Li, Wang, et al., 2020) play an important role in identifying and classifying COVID‐19 cases and have achieved excellent results (Liu et al., 2020). Deep learning models can also be used to forecast the behaviour of the virus and help control the spread of the disease (Wieczorek et al., 2020). Chest X‐rays and computed tomography (CT) are the most widely used medical imaging methods and have been used by many researchers to develop models that help radiologists predict disease. Deep learning methods produce state-of-the-art results in medical image analysis. Recent works (LeCun et al., 2015; Litjens et al., 2017; Shin et al., 2016) using computer vision (CV) and AI techniques, including deep learning (DL) models and especially convolutional neural networks (CNNs), have proven useful for examining medical images. CNNs are widely used to automate the analysis of infections in chest radiographs to detect COVID‐19 imaging patterns. However, one of the main limitations of current approaches is overfitting due to the limited amount of publicly available COVID‐19 data, which makes it much more difficult to create an efficient model with overall high performance. Some existing models reported lower accuracy scores and needed to be validated on a larger dataset, and there was no evidence that standard performance measures such as Matthew's correlation coefficient were used in their approaches. Therefore, it is very important to develop a robust and effective system to automatically diagnose COVID‐19. The aforementioned problems are addressed in this paper. The results of this study are promising and validate the efficiency of transfer learning in the automatic detection of COVID‐19 disease.
In our previous study in December 2020, we used the InceptionV3 model to diagnose COVID‐19 using chest X‐rays (Asif et al., 2020). We applied transfer learning, removed the last layer of the InceptionV3 model, and retrained the last layer, with a softmax activation function, on the chest X‐ray dataset. However, due to the lack of a COVID‐19 dataset, we were only able to train our model on 864 COVID‐19 images. After training the InceptionV3 model, we achieved a validation accuracy of 93% on the COVID‐19 chest X‐ray dataset. This work is an updated version of our previous work, with the method greatly expanded and improved. The shortcomings of existing work identified by the current study led us to propose a deep learning-based system for efficient and cost-effective detection of COVID‐19 with low misclassification rates. The diagnostic system presented in this paper utilizes a pre-trained model based on the concept of transfer learning to extract the distinctive features present in chest radiographs of COVID‐19 patients. Transfer learning is an effective strategy for accurate feature extraction and classification: it saves time and computing power and provides higher accuracy than training from scratch. Deep transfer learning was performed on six popular deep learning architectures (VGG16, DenseNet201, MobileNetV2, ResNet50, Xception, and EfficientNetB0) to overcome the problem of insufficient training data for deep learning. An open dataset of chest X‐ray images from the Kaggle repository was used to evaluate the performance of the proposed study. The dataset consists of 3886 chest X‐ray images, of which 1200 are COVID‐19, 1341 are normal chest X‐rays, and 1345 are viral pneumonia images. Each model consists of two parts: a feature extractor and a classifier. The feature extractor consists of convolutional base layers that extract features from the images, while the classifier classifies the extracted features.
We retain the convolutional base layers and customize the final classification layer by adding a new set of layers, such as multiple dense layers and a batch normalization layer, to alleviate the vanishing gradient problem, reduce the need for dropout, speed up training, enhance learning, improve accuracy, and reduce overfitting. In this stage, only the additional layers are trained, while the other weights of the model remain frozen. The new set of layers begins with a flatten layer, which transforms the data from the previous layer into a one-dimensional vector. The classification part consists of two dense layers with a batch normalization layer between them; we introduced the batch normalization layer there because it reduces the need for dropout, resulting in faster training and higher accuracy. The first dense layer consists of 128 neurons with the ReLU activation function, and the final output is produced by a dense layer with three neurons that uses the softmax activation function for multi-class classification. This provides the respective probabilities of COVID‐19, normal, and viral pneumonia. Since the data in our previous work was insufficient, we were motivated to develop a more robust multi-class deep CNN model based on the transfer learning method, with a larger amount of data, to identify patients infected with COVID‐19 and help healthcare professionals. The main contributions of our proposed research are:

1. A novel and robust deep CNN model has been developed utilizing state-of-the-art deep learning architectures for multi-class classification of COVID‐19, viral pneumonia, and normal patients.
2. Six popular deep learning architectures, namely VGG16, DenseNet201, MobileNetV2, ResNet50, Xception, and EfficientNetB0, are trained by deploying the transfer learning technique on a publicly available COVID‐19 chest X‐ray dataset.
3. Data augmentation is applied to reduce overfitting, increase the size of the dataset, and improve the performance of the model in classifying COVID‐19 cases.
4. We conduct a comprehensive analysis of the effectiveness of the proposed multi-class classification model using a variety of performance evaluation metrics, including accuracy, precision, sensitivity, F1-score, Matthew's Correlation Coefficient (MCC), and the confusion matrix.
5. Our experimental results show that the proposed CNN model provides better classification accuracy than existing modern approaches.
6. The proposed model is an end-to-end structure that eliminates the need for manual feature extraction, can reduce the workload of radiologists, and can identify COVID‐19 more efficiently.

This article is structured as follows: Section 1 provides the introduction, and related work is described in Section 2. Section 3 details the datasets used, X‐ray image pre-processing, the classification model, and the model hyperparameters. Experimental results and discussion are presented in Section 4. Finally, Section 5 concludes our research and discusses areas for future work.
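Of the evaluation metrics listed, Matthew's Correlation Coefficient is the least commonly reported. As a point of reference, a minimal pure-Python sketch of the multiclass MCC (Gorodkin's R_K statistic) computed from a K × K confusion matrix might look like this; the function name and layout are ours, not from the paper:

```python
import math

def matthews_corrcoef_multiclass(confusion):
    """Multiclass Matthews Correlation Coefficient (Gorodkin's R_K),
    computed from a K x K confusion matrix with rows = true classes
    and columns = predicted classes."""
    k = len(confusion)
    s = sum(sum(row) for row in confusion)      # total number of samples
    c = sum(confusion[i][i] for i in range(k))  # correctly predicted samples
    t = [sum(confusion[i]) for i in range(k)]   # true count per class
    p = [sum(confusion[i][j] for i in range(k)) for j in range(k)]  # predicted count per class
    num = c * s - sum(ti * pi for ti, pi in zip(t, p))
    den = math.sqrt((s * s - sum(pi * pi for pi in p)) *
                    (s * s - sum(ti * ti for ti in t)))
    return num / den if den else 0.0
```

A perfect diagonal confusion matrix yields 1.0, while predictions uncorrelated with the true labels yield 0.0, which makes MCC a stricter summary than accuracy on imbalanced data.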

RELATED WORK

Many researchers are doing their best to use artificial intelligence to develop new systems that support the diagnosis of COVID‐19. Most countries use chest X‐rays and computed tomography (CT) images because they are relatively easy to obtain from patients and are inexpensive. Deep learning has been applied effectively to a variety of image classification tasks in the medical field, with performance comparable to human-level performance. Several deep learning-based medical imaging systems have also been adopted to assist clinicians in classifying patients with COVID‐19 infection. For example, Ozturk et al. in 2020 proposed an automatic COVID‐19 detection system based on the DarkNet model to perform binary and multi-class classification tasks, achieving a classification accuracy of 98.08%. Luz et al. in 2021 studied deep learning architectures for identifying COVID‐19; using a COVID‐19 chest X‐ray dataset, their EfficientNet model achieved an accuracy of 93.9%. In Reference (Verma et al., 2020), the authors used the InceptionV3 model to extract features from the dataset and then used an artificial neural network to classify the images, achieving a very high accuracy of 99.01%. In Reference (Shelke et al., 2021), the authors proposed a classification model for COVID‐19 based on VGG16, with an accuracy of 95.9%. In Reference (Apostolopoulos & Mpesiana, 2020), the authors used MobileNetV2 and VGG19 on different datasets, with MobileNetV2 obtaining a 97.40% accuracy rate. Nayak et al. in 2021 proposed an automated deep learning approach based on X‐ray images; they evaluated the effectiveness of eight different pre-trained CNN models for COVID‐19 classification from chest X‐rays, with the best performance, 98.33% accuracy, obtained by ResNet‐34. Jain et al.
in 2020 implemented the ResNet‐101 model for COVID‐19 classification and achieved 97.78% accuracy. Wang, Lin, & Wong in 2020 proposed the COVID-Net model for detecting COVID‐19 using chest X‐rays; the model was trained to classify chest X‐rays into COVID‐19, normal, and pneumonia classes, and on 16,756 images it achieved a classification accuracy of 92.4%. Sethy and Behra in 2020 used the ResNet50 model with an SVM classifier to separate COVID‐19 from normal cases, achieving 95.38% accuracy. The authors of Reference (Hammoudi et al., 2021) leveraged the standard version of DenseNet‐169 to achieve 95.72% accuracy. A pre-trained VGG‐16 model using data augmentation to classify COVID‐19 gives a 95% accuracy rate. Narin et al. (2021) studied five pre-trained deep learning architectures to detect coronavirus infections in chest X‐ray images, achieving 99.5% accuracy with the ResNet50 model. In another work, Ouchicha et al. in 2020 proposed the CVDNet model to classify COVID‐19 cases from chest radiographs, achieving 96.69% accuracy for multi-class chest X‐ray classification. Makris et al. in 2020 used several pre-trained CNN models and compared their performance on a three-class classification of chest X‐ray images; VGG16 was found to perform best, with an overall accuracy of 95.88%. Farooq and Hafeez in 2020 proposed a COVID‐ResNet model based on a pretrained ResNet‐50 and achieved an overall accuracy of 96.23% for three-class classification of chest X‐rays. Gupta et al. in 2021 proposed InstaCovNet‐19, an integrated stacked deep convolutional network that utilizes five different pre-trained architectures; it achieves an accuracy of 99.53% for binary classification and 99.08% for three-class classification. Das et al. in 2020 proposed an Xception model for detecting COVID‐19 patients.
The model was tested on a publicly available chest X‐ray dataset, with an accuracy of 97.40%. Xu et al. (2020) proposed ResNet+Location Attention for three-class classification, achieving an accuracy of 86.7% on CT images. Khan et al. in 2020 proposed the Xception-based CoroNet model for multi-class classification using X‐ray images, achieving an accuracy of 95% for the three-class task. Joshi et al. in 2020 provided an in-depth analysis of COVID‐19 transmission trends; their artificial intelligence-based tool can identify COVID‐19 faster than traditional medical data reporting systems by processing epidemiological data. In Reference (Ahuja et al., 2021), the authors used 349 COVID‐19 images and 397 normal images and applied data augmentation to increase the size of the dataset; using transfer learning for binary classification, they achieved 99.4% accuracy with the ResNet18 model. Singh et al. (2021) fine-tuned the VGG16 model to extract features from the images and performed automatic detection of COVID‐19 using four different classifiers; a bagging ensemble with SVM achieved a detection rate of 95.7% in 385 ms. AlZu'bi, Makki, et al. (2021) explain the economic impact of the COVID‐19 pandemic in Jordan and propose a mechanism to address it as the vaccine spreads across the country. AlZu'bi, Jararweh, et al. (2019) propose a method based on multiresolution analysis to segment medical volumes under various conditions to facilitate the work of radiologists. It can be seen from the above research that using deep learning to identify the novel coronavirus in radiological images may reduce the pressure on radiologists. However, it is still not clear which model gives the best results, as different researchers use different methods to classify COVID‐19.
From the discussion above, it is evident that most of the models proposed so far do not provide sufficient classification performance and are limited by the number of samples available for experiments. Most of the methods have overfitting problems, and there is no evidence that standard evaluation metrics, such as MCC, were used. In addition, most models were trained on a small number of samples, and in many cases the data is unbalanced, so they may lack robustness. This led us to develop a deep learning model that can reduce the workload of clinicians and provide them with a second opinion.

MATERIALS AND METHODS

This section describes the proposed approach for classifying COVID‐19 based on a deep CNN architecture. The purpose of our research is to evaluate six different pre-trained models and propose a deep learning model for COVID‐19 classification. Figure 1 shows the end-to-end workflow of our proposed method. As the figure shows, the approach consists of three main stages: data preparation and preprocessing, deep learning models for classification, and hyperparameter tuning. The proposed model takes chest X‐rays as input and classifies each image into one of three categories: COVID‐19, viral pneumonia, or normal.
FIGURE 1

Overall workflow of the proposed approach for three‐class problem.


Transfer learning models for classification

Nowadays, AI research based on deep learning provides the most advanced solutions for computer vision. The CNN is a deep learning technique that has achieved excellent results in a wide range of image classification tasks. However, because the availability of COVID‐19 X‐ray images is limited, it is difficult to train these models from scratch to predict COVID‐19. In the field of medical imaging, labelled data is scarce, and this is a major challenge when building a high-performance deep learning system. These challenges can be addressed with a pre-trained CNN using the concept of transfer learning (TL). In TL, the weights of a model pre-trained on some dataset, such as ImageNet, are reused to solve a related problem with a relatively smaller dataset. Moreover, a pre-trained network has been found to perform better than a network trained from scratch. CNNs with TL play an important role in classification, and TL is faster and more efficient than developing classification solutions from scratch. Thus, we adopted six pre-trained CNN models based on the TL concept. The complete TL procedure is shown in Figure 2. The main motivation behind our proposed CNN model is the automatic detection of COVID‐19 patients with maximum efficiency and faster detection time. In this work, VGG16, DenseNet201, MobileNetV2, ResNet50, Xception, and EfficientNetB0, all pre-trained on the ImageNet dataset, were used as base models for classification. These models are trained using a transfer learning approach to develop the CNN models. Each model consists of two parts: a feature extractor and a classifier. The feature extractor consists of convolutional base layers that extract features from the images, while the classifier classifies the extracted features.
We retain the convolutional base layers and customize the final classification layer by adding a new set of layers, such as multiple dense layers and a batch normalization layer, to alleviate the vanishing gradient problem, reduce the need for dropout, speed up training, enhance learning, improve accuracy, and reduce overfitting. In this stage, only the additional layers are trained, while the other weights of the models remain frozen. The new set of layers begins with a flatten layer, which transforms the data from the previous layer into a one-dimensional vector. The classification part consists of two dense layers with a batch normalization layer between them; we introduced the batch normalization layer there because it reduces the need for dropout, resulting in faster training and higher accuracy. The first dense layer consists of 128 neurons with the ReLU activation function, and the final output is produced by a dense layer with three neurons that uses the softmax activation function for multi-class classification. This provides the respective probabilities of COVID‐19, normal, and viral pneumonia. A detailed explanation of the models and how they are used for classification is given in the next section.
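The frozen-base-plus-custom-head pattern described above can be sketched in Keras as follows. This is a minimal illustration, not the paper's published code: the helper function, its name, and its defaults are our own, and the head follows the layout stated in the text (Flatten, Dense 128 with ReLU, batch normalization, Dense 3 with softmax).

```python
import tensorflow as tf
from tensorflow.keras import applications, layers, models

# The six ImageNet-pre-trained bases evaluated in the paper.
BASES = {
    "VGG16": applications.VGG16,
    "DenseNet201": applications.DenseNet201,
    "MobileNetV2": applications.MobileNetV2,
    "ResNet50": applications.ResNet50,
    "Xception": applications.Xception,
    "EfficientNetB0": applications.EfficientNetB0,
}

def build_transfer_model(name="VGG16", input_shape=(224, 224, 3),
                         n_classes=3, weights="imagenet"):
    """Frozen pre-trained base plus the custom head from the text:
    Flatten -> Dense(128, ReLU) -> BatchNormalization -> Dense(3, softmax)."""
    # include_top=False drops the base network's original ImageNet classifier.
    base = BASES[name](include_top=False, weights=weights,
                       input_shape=input_shape)
    base.trainable = False  # freeze the convolutional base; only the head trains
    return models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.BatchNormalization(),
        layers.Dense(n_classes, activation="softmax"),
    ])
```

The softmax output gives the three class probabilities (COVID‐19, normal, viral pneumonia) directly, so no separate classifier stage is needed after the network.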
FIGURE 2

The architecture of our transfer learning models for the classification of COVID‐19.


VGG16

The VGG architecture was developed by a team at the University of Oxford. VGG16 (Simonyan & Zisserman, 2014) is trained on the ImageNet database, which contains more than 1 million images. Because the VGG16 network has undergone extensive training, it can provide excellent accuracy even when the image dataset is small. VGG16 takes its name from the 16 weight layers in its architecture and uses small 3 × 3 receptive fields. It consists of five convolutional blocks, each containing convolutional layers combined with ReLU activations and a max-pooling layer. In this work, the last fully connected layer is removed and replaced with the designed classifier using the flatten layer, dense layers, and a batch normalization layer, as shown in Figure 3.
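The block structure described above can be sketched with a small helper; this is an illustrative reconstruction (the function is ours), showing the repeated 3 × 3 convolution plus max-pooling pattern that VGG16 stacks five times:

```python
import tensorflow as tf
from tensorflow.keras import layers

def vgg_block(x, filters, n_convs):
    """Sketch of one VGG convolutional block: a stack of 3x3 convolutions
    with ReLU activations, followed by 2x2 max pooling that halves the
    spatial resolution."""
    for _ in range(n_convs):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.MaxPooling2D(2)(x)
```

Applying five such blocks to a 224 × 224 input halves the resolution each time, which is why the base network's final feature map is 7 × 7.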
FIGURE 3

VGG16 architecture designed for multiclass classification.


DenseNet201

DenseNet (Huang et al., 2017) is known for its outstanding performance on four object recognition benchmarks: CIFAR‐100, CIFAR‐10, ImageNet, and SVHN. To maximize the flow of information between the layers of the network, the DenseNet architecture connects each layer to the subsequent layers in a feed-forward fashion. DenseNet has several benefits, such as alleviating the vanishing gradient problem, encouraging feature reuse, and dramatically decreasing the number of parameters. DenseNet201 takes its name from the 201 layers in its architecture. The model can achieve high performance with less memory and low computational cost. In this work, the last fully connected layer is removed and replaced with the designed classifier using the flatten layer, dense layers, and a batch normalization layer, as shown in Figure 4.
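The dense connectivity idea can be sketched as follows; this is a simplified illustration of the concatenation pattern (our own, omitting DenseNet's bottleneck and transition layers), not the exact DenseNet201 block:

```python
import tensorflow as tf
from tensorflow.keras import layers

def dense_block(x, n_layers=3, growth=12):
    """Sketch of a DenseNet dense block: each new layer's output is
    concatenated onto everything before it, so every layer sees the
    feature maps of all preceding layers. `growth` is the number of
    channels each layer adds."""
    for _ in range(n_layers):
        y = layers.Conv2D(growth, 3, padding="same", activation="relu")(x)
        x = layers.Concatenate()([x, y])
    return x
```

Because each layer only adds `growth` new channels while reusing all earlier features, the block stays parameter-efficient even as depth grows.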
FIGURE 4

DenseNet201 architecture designed for multiclass classification.


MobileNetV2

The MobileNetV2 (Sandler et al., 2018) architecture was released in early 2018. MobileNetV2 builds on ideas from MobileNetV1 and optimizes them with new ones. Architecturally, MobileNetV2 adds two new modules: linear bottlenecks between the layers, and shortcut connections between the bottlenecks. The core idea of MobileNetV2 is that the bottlenecks encode the model's intermediate inputs and outputs, while the inner layers encapsulate the model's transformation from low-level concepts (e.g., pixels) to high-level descriptors (e.g., image categories). Finally, as with traditional residual connections, the shortcuts allow for faster training and higher accuracy. In this work, the last fully connected layer is removed and replaced with the designed classifier using the flatten layer, dense layers, and a batch normalization layer, as shown in Figure 5.
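The bottleneck-and-shortcut idea can be sketched as an inverted residual block. This is an illustrative simplification (our own): MobileNetV2 actually uses ReLU6 activations and batch normalization at each step, which are omitted here for brevity.

```python
import tensorflow as tf
from tensorflow.keras import layers

def inverted_residual(x, expand=6, out_ch=16):
    """Sketch of MobileNetV2's inverted residual block: expand channels
    with a 1x1 conv, filter spatially with a depthwise conv, then project
    back down through a linear (no activation) bottleneck."""
    in_ch = x.shape[-1]
    y = layers.Conv2D(in_ch * expand, 1, activation="relu")(x)       # expansion
    y = layers.DepthwiseConv2D(3, padding="same", activation="relu")(y)
    y = layers.Conv2D(out_ch, 1)(y)                                  # linear bottleneck
    if in_ch == out_ch:
        y = layers.Add()([x, y])  # shortcut connection between bottlenecks
    return y
```

The projection layer is deliberately linear: applying ReLU to the low-dimensional bottleneck would destroy information, which is the "linear bottleneck" insight of the paper.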
FIGURE 5

MobileNetV2 architecture designed for multiclass classification.


ResNet50

This network is very deep. A standard network component called the residual module can be used to form a more complex network (a network within a network) and train it using standard stochastic gradient descent. The ResNet (He et al., 2016) architecture was a pioneering work demonstrating that extremely deep networks can be trained with standard SGD by using residual modules. The residual module was later updated to use identity mappings, further improving accuracy. Although the network is deep, its model size is relatively small because it uses a global average pooling layer instead of fully connected layers. ResNet50 takes its name from the 50 layers in its architecture. In this work, the last fully connected layer is removed and replaced with the designed classifier using the flatten layer, dense layers, and a batch normalization layer, as shown in Figure 6.
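The residual module can be sketched as follows. This is a minimal identity-shortcut illustration of our own, not the exact three-convolution bottleneck block that ResNet50 uses:

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    """Sketch of an identity residual block: two 3x3 convolutions whose
    output is added back onto the input (the shortcut), so the block only
    has to learn a residual correction rather than a full mapping."""
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    return layers.Activation("relu")(layers.Add()([shortcut, y]))
```

The identity shortcut gives gradients an unobstructed path backwards through the network, which is what makes 50-plus-layer networks trainable with plain SGD.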
FIGURE 6

ResNet50 architecture designed for multiclass classification.


Xception

The Xception network is an extension of the Inception network. Xception (François, 2017) uses depthwise separable convolutions and can outperform Inception‐v3 on a large image classification dataset with 350 million images and 17,000 classes. The Xception architecture has the same number of parameters as Inception‐v3; its performance gains come not from increased capacity but from more efficient use of the model parameters. Xception also has the smallest weight file, at only 91 MB. In this work, the last fully connected layer is removed and replaced with the designed classifier using the flatten layer, dense layers, and a batch normalization layer, as shown in Figure 7.
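In Keras the depthwise separable convolution is available directly as a layer; a small sketch (our own) of the operation Xception is built from:

```python
import tensorflow as tf
from tensorflow.keras import layers

# A depthwise separable convolution factors a standard convolution into a
# per-channel spatial (depthwise) conv followed by a 1x1 pointwise conv
# that mixes channels. This is the building block Xception stacks.
inp = tf.keras.Input(shape=(224, 224, 3))
out = layers.SeparableConv2D(32, 3, padding="same", activation="relu")(inp)
model = tf.keras.Model(inp, out)
```

Factoring the convolution this way cuts the multiply-accumulate count substantially relative to a full 3 × 3 convolution with the same input and output channels, which is where the parameter efficiency comes from.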
FIGURE 7

Xception architecture designed for multiclass classification.


EfficientNetB0

Researchers at Google proposed EfficientNet (Tan & Le, 2019), a new model scaling method that uses a simple and efficient compound coefficient to balance network depth, width, and input image resolution. A series of EfficientNet models was obtained by scaling up the EfficientNet base model, and this series beat previous convolutional neural network models in terms of both efficiency and accuracy. The core structure of EfficientNetB0 is the mobile inverted bottleneck convolution (MBConv) module, which also incorporates the attention mechanism of the Squeeze-and-Excitation Network (SENet). In this work, the last fully connected layer is removed and replaced with the designed classifier using the flatten layer, dense layers, and a batch normalization layer, as shown in Figure 8.
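The compound scaling rule can be written down concretely; the sketch below uses the coefficients reported in the EfficientNet paper (the helper function itself is ours):

```python
# EfficientNet's compound scaling rule (Tan & Le, 2019): a single compound
# coefficient phi scales depth, width, and input resolution together as
# alpha**phi, beta**phi, gamma**phi, chosen so that
# alpha * beta**2 * gamma**2 is approximately 2 (roughly doubling FLOPs
# per unit increase of phi).
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # coefficients from the EfficientNet paper

def scale_factors(phi):
    """Depth, width, and resolution multipliers for compound coefficient phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi
```

Setting phi = 0 recovers the B0 base model; larger phi values generate the B1-B7 family by growing all three dimensions in a balanced way rather than scaling any single one.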
FIGURE 8

EfficientNetB0 architecture designed for multiclass classification.


EXPERIMENTAL SETUP AND RESULTS

This section describes the experimental setup, the dataset used, data preprocessing and augmentation, the details of the model implementation, and the performance metrics, and then analyses the performance of the proposed model. Finally, the performance of the proposed model is compared with state-of-the-art methods.

COVID‐19 chest X‐ray dataset

This study uses an open-access database of chest X‐ray images from the Kaggle repository (Chowdhury et al., 2020). A team of researchers from Qatar University and Dhaka University, in collaboration with doctors from Pakistan and Malaysia, created this chest X‐ray image dataset, which is updated regularly. They have also added COVID‐19 chest X‐ray images from the Italian Society of Medical and Interventional Radiology (SIRM) database. The dataset is divided into three classes: COVID‐19, normal, and viral pneumonia. There are 1200 COVID‐19-positive images, 1345 viral pneumonia images, and 1341 normal images; sample images from all three classes are depicted in Figure 9. Table 1 shows the number of X‐ray images in each class.
FIGURE 9

Sample of chest X‐ray from the dataset. The first row represents COVID‐19 images, the second row represents normal images, and the third row represents viral pneumonia images.

TABLE 1

Chest X‐ray image dataset belonging to each class.

Class               Number of images
COVID‐19            1200
Viral pneumonia     1345
Normal              1341
Total               3886

Image preprocessing

Image preprocessing is an important prerequisite for developing an efficient detection system. Because the images in the dataset vary in size, all images were resized to 224 × 224 pixels for input to the VGG16, DenseNet201, MobileNetV2, ResNet50, Xception, and EfficientNetB0 architectures. In addition, we split the dataset into training and test images: 75% of the images are reserved for training and 25% for testing.
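The paper states only the split percentages, not the splitting procedure. A minimal pure-Python sketch of a 75/25 path-level split, with shuffling, seeding, and truncation chosen by us, looks like this (with int() truncation, 3886 images yield the 2915 training images reported later in the text and 971 test images):

```python
import random

def train_test_split_paths(paths, test_frac=0.25, seed=42):
    """Shuffle a list of image paths and split it into train and test
    subsets. The shuffle seed and int() truncation of the test count are
    our choices; the paper specifies only the 75/25 ratio."""
    rng = random.Random(seed)
    shuffled = list(paths)
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    return shuffled[n_test:], shuffled[:n_test]
```

Splitting at the path level before any augmentation ensures that no augmented copy of a test image can leak into the training set.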

Data augmentation

The deep learning models used in this research require a large amount of data to be trained effectively; however, the available input data are not large enough, and overfitting can occur on small datasets. Overfitting is a situation where the model fails to generalize to unseen data. Therefore, a data augmentation technique was applied to synthetically increase the size of the data over the original data and to help reduce overfitting during CNN training. In this article, the following techniques are used to augment the images: (1) horizontal flipping, (2) rotation (images rotated by 15°, with a rotation range of 20), and (3) scaling. These three augmentation strategies are used to generate a new training set: the 2915 images initially allocated to train the models for the multiclass classification task grew to 11,660 training images after data augmentation, four times the original training set.
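The fourfold expansion described above can be sketched with simple NumPy implementations of the three augmentations; the exact augmentation parameters and implementations of the paper's pipeline are approximated here:

```python
import numpy as np

def hflip(img):
    """Horizontal flip."""
    return img[:, ::-1]

def rotate(img, deg=15):
    """Nearest-neighbour rotation about the image centre (pure NumPy)."""
    h, w = img.shape[:2]
    t = np.deg2rad(deg)
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    ys = np.cos(t) * (yy - cy) + np.sin(t) * (xx - cx) + cy
    xs = -np.sin(t) * (yy - cy) + np.cos(t) * (xx - cx) + cx
    ys = np.clip(np.round(ys).astype(int), 0, h - 1)
    xs = np.clip(np.round(xs).astype(int), 0, w - 1)
    return img[ys, xs]

def zoom(img, factor=1.2):
    """Scale by centre-cropping then stretching back to the original size."""
    h, w = img.shape[:2]
    ch, cw = int(h / factor), int(w / factor)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = img[y0:y0 + ch, x0:x0 + cw]
    rows = np.arange(h) * ch // h
    cols = np.arange(w) * cw // w
    return crop[rows][:, cols]

def augment(batch):
    """Original + three augmented copies -> 4x the training set,
    matching the 2915 -> 11,660 expansion reported in the text."""
    return np.concatenate([batch,
                           np.stack([hflip(im) for im in batch]),
                           np.stack([rotate(im) for im in batch]),
                           np.stack([zoom(im) for im in batch])])

batch = np.random.default_rng(0).integers(0, 255, (8, 224, 224, 3), dtype=np.uint8)
print(augment(batch).shape)  # (32, 224, 224, 3)
```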

Models implementation

In this study, all deep learning models (VGG16, DenseNet201, MobileNetV2, ResNet50, Xception, and EfficientNetB0) are implemented using the TensorFlow package, which provides a Python application programming interface (API). We also used the Keras API, the official front end of TensorFlow, for the development and implementation of the CNN algorithms. All experiments were performed on Google Colab with 12 GB RAM and a 12 GB NVIDIA Tesla K80 GPU. We used the Adam optimizer to train the models, with the categorical cross‐entropy loss function for the 3‐class classification. An epoch is one cycle of updating the network weights using the entire training data; model performance typically improves as the number of epochs increases, up to a point. All models were run for 20 epochs with a learning rate of 0.001 and a batch size of 64. To counter overfitting, we implemented an early stopping technique based on validation performance. The parameters used to train the models, along with the achieved test accuracies, are shown in Table 2.
TABLE 2

Parameters used to train the models.

Performance measures | VGG16 | DenseNet201 | MobileNetV2 | ResNet50 | Xception | EfficientNetB0
Batch size | 64 | 64 | 64 | 64 | 64 | 64
Layers | 16 | 201 | 53 | 50 | 71 | 237
Optimizer | Adam | Adam | Adam | Adam | Adam | Adam
Parameters (M) | 138 | 20.2 | 3.4 | 25.6 | 22.9 | 5.3
Activation function | Softmax | Softmax | Softmax | Softmax | Softmax | Softmax
Learning rate | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001
Custom input size | 224 × 224 | 224 × 224 | 224 × 224 | 224 × 224 | 224 × 224 | 224 × 224
Testing accuracy | 0.978 | 0.957 | 0.942 | 0.974 | 0.937 | 0.959
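A Keras sketch of the training setup described above (frozen pre-trained backbone plus a custom classifier head with dense and batch-normalization layers); the exact head layout and early-stopping patience are assumptions, since the paper does not report them:

```python
import tensorflow as tf

def build_model(weights="imagenet", n_classes=3, lr=1e-3):
    """Frozen pre-trained backbone + custom classifier head (a sketch;
    the exact head layout used in the paper is an assumption)."""
    base = tf.keras.applications.VGG16(include_top=False, weights=weights,
                                       input_shape=(224, 224, 3))
    base.trainable = False  # transfer learning: freeze the convolutional base
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# Early stopping on validation performance, as described in the text.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3,
                                              restore_best_weights=True)

# weights=None avoids the ImageNet download here; use "imagenet" in practice.
model = build_model(weights=None)
print(model.output_shape)  # (None, 3)
```

Training would then call `model.fit(..., epochs=20, batch_size=64, callbacks=[early_stop])` with the augmented training set and a validation split.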

Performance evaluation metrics

In this section, we describe the evaluation metrics used to assess the effectiveness of the proposed CNN model. The models were tested on 300 COVID‐19 images, 336 normal images, and 337 viral pneumonia images. The performance of each model is evaluated using accuracy, sensitivity, precision, F1‐score, and the Matthews correlation coefficient (MCC). The confusion matrix, shown in Figure 10, was used to obtain the values of the performance metrics. These metrics are computed from the entries of the confusion matrix: TP, FP, TN, and FN, representing true positives, false positives, true negatives, and false negatives. TP is the number of correctly predicted positive patients, FN is the number of infected patients incorrectly identified as normal or pneumonia cases, FP is the number of normal or pneumonia patients predicted as COVID‐19, and TN is the number of healthy or pneumonia patients correctly predicted. The performance evaluation metrics are defined as follows:

Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision = TP / (TP + FP)
Sensitivity = TP / (TP + FN)
F1‐score = 2 × (Precision × Sensitivity) / (Precision + Sensitivity)
MCC = (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))
FIGURE 10

Confusion matrix.

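These metrics can be computed directly from a multiclass confusion matrix; the sketch below uses an illustrative 3 × 3 matrix whose error pattern mirrors the VGG16 results described in this section (the exact counts are hypothetical):

```python
import numpy as np

def macro_metrics(cm):
    """Per-class TP/FP/FN from a KxK confusion matrix (rows = true class,
    columns = predicted class), then macro-averaged scores."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)      # a.k.a. recall
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = tp.sum() / cm.sum()
    return accuracy, precision.mean(), sensitivity.mean(), f1.mean()

def mcc(cm):
    """Multiclass Matthews correlation coefficient from the confusion matrix."""
    cm = np.asarray(cm, dtype=float)
    c, s = np.trace(cm), cm.sum()
    t, p = cm.sum(axis=1), cm.sum(axis=0)
    return (c * s - t @ p) / np.sqrt((s**2 - p @ p) * (s**2 - t @ t))

# Illustrative matrix (rows/columns: COVID-19, normal, viral pneumonia;
# counts hypothetical, chosen to match the test-set sizes of 300/336/337).
cm = [[298, 0, 2],
      [0, 327, 9],
      [2, 8, 327]]
acc, prec, sens, f1 = macro_metrics(cm)
print(round(acc, 3))  # 0.978
```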

Performance analysis

This section presents detailed experiments and results for the automatic detection and classification of COVID‐19. Extensive experiments were conducted using several state‐of‐the‐art models (VGG16, DenseNet201, MobileNetV2, ResNet50, Xception, and EfficientNetB0) to develop a robust classifier. The main goal of this study is to examine the success of these deep learning architectures and compare it with the performance of other CNN models in the literature. The six architectures were evaluated based on precision, sensitivity, and F1‐score. Table 3 shows the class‐wise performance of the models for the three classes used in this study (COVID‐19, viral pneumonia, and normal), together with the macro average scores obtained for each model. From the results, we can see that the classifiers work well for all classes. Based on these results, we consider VGG16 to be our highest‐performing model: it exhibits the highest sensitivity and F1‐score for the COVID‐19 class and the highest F1‐score for the viral pneumonia class. More specifically, our proposed VGG16 model achieves a sensitivity, precision, and F1‐score of 0.99, 0.99, and 0.99, respectively, for COVID‐19 classification, and the best overall F1‐score of 0.97 for the normal and viral pneumonia classes. On the other hand, the Xception model provided relatively low sensitivity, precision, and F1‐score values for all classes.
TABLE 3

Class‐wise sensitivity, F1‐score and precision for all the models.

Model and class | Precision | Sensitivity | F1‐score | Accuracy
VGG16
  COVID‐19 | 0.99 | 0.99 | 0.99 |
  Viral pneumonia | 0.97 | 0.97 | 0.97 |
  Normal | 0.98 | 0.97 | 0.97 |
  Macro average | 0.98 | 0.98 | 0.98 | 0.978
DenseNet201
  COVID‐19 | 0.99 | 0.95 | 0.97 |
  Viral pneumonia | 0.95 | 0.94 | 0.94 |
  Normal | 0.94 | 0.98 | 0.96 |
  Macro average | 0.96 | 0.96 | 0.96 | 0.957
MobileNetV2
  COVID‐19 | 1.00 | 0.93 | 0.96 |
  Viral pneumonia | 0.89 | 0.96 | 0.92 |
  Normal | 0.95 | 0.94 | 0.95 |
  Macro average | 0.95 | 0.94 | 0.94 | 0.942
ResNet50
  COVID‐19 | 1.00 | 0.99 | 0.99 |
  Viral pneumonia | 0.95 | 0.97 | 0.96 |
  Normal | 0.97 | 0.96 | 0.97 |
  Macro average | 0.98 | 0.97 | 0.98 | 0.974
Xception
  COVID‐19 | 1.00 | 0.94 | 0.97 |
  Viral pneumonia | 0.89 | 0.94 | 0.92 |
  Normal | 0.94 | 0.93 | 0.93 |
  Macro average | 0.94 | 0.94 | 0.94 | 0.937
EfficientNetB0
  COVID‐19 | 0.99 | 0.98 | 0.98 |
  Viral pneumonia | 0.98 | 0.91 | 0.94 |
  Normal | 0.92 | 0.99 | 0.96 |
  Macro average | 0.96 | 0.96 | 0.96 | 0.959
Table 4 compares the detailed classification results of the proposed models in terms of the different metrics. It can be seen that the proposed VGG16 model achieves the highest performance, with a precision of 0.979, a sensitivity of 0.978, an F1‐score of 0.978, an accuracy of 0.978, and an MCC of 0.967. The ResNet50 network was found to be the second‐best performing network in COVID‐19 prediction, achieving 0.975 precision, 0.974 sensitivity, 0.975 F1‐score, 0.974 accuracy, and 0.961 MCC. The proposed VGG16 model outperforms the other models on almost every performance metric, including accuracy, precision, and sensitivity. After intensive experiments, considering the performance of the different models, we chose VGG16 as our preferred model.
TABLE 4

Comparison of classification results of different models.

Models | Accuracy | Precision | Sensitivity | F1‐score | MCC
VGG16 | 0.978 | 0.979 | 0.978 | 0.978 | 0.967
DenseNet201 | 0.957 | 0.959 | 0.957 | 0.958 | 0.936
MobileNetV2 | 0.942 | 0.946 | 0.941 | 0.943 | 0.914
ResNet50 | 0.974 | 0.975 | 0.974 | 0.975 | 0.961
Xception | 0.937 | 0.941 | 0.937 | 0.938 | 0.906
EfficientNetB0 | 0.959 | 0.962 | 0.960 | 0.960 | 0.940
Figure 11 compares the models in terms of the different performance indicators, namely accuracy, precision, sensitivity, F1‐score, and Matthews correlation coefficient (MCC). It can be clearly seen from the figure that the proposed VGG16 model outperforms the other models in terms of accuracy, sensitivity, F1‐score, and precision.
FIGURE 11

Accuracy, precision, F1‐score, Matthew's correlation coefficient (MCC) and sensitivity for the proposed models.

The confusion matrix is the main tool for evaluating errors in classification problems. It shows the number of images correctly and incorrectly recognized by the model. We constructed the confusion matrices of the six architectures proposed in the study to evaluate their performance, as shown in Figure 12. The confusion matrices show that the models perform well on the test set, suggesting that such a model could support rapid COVID‐19 screening. From the matrix, we can conclude that out of 300 COVID‐19 images, our proposed VGG16 model misclassifies only two as viral pneumonia. The proposed model did not misclassify any normal images as COVID‐19, although nine normal images were misclassified as viral pneumonia. Among the 337 viral pneumonia images, only two were misclassified as COVID‐19 and eight as normal. It can be noticed that the model shows very little confusion between COVID‐19 and the other two image classes, and the high accuracy, sensitivity, and F1‐score show that the proposed model does an excellent job of classifying most images. This shows that this deep learning technique is very reliable in distinguishing COVID‐19 images from other X‐ray images.
FIGURE 12

Confusion matrices for all deep CNN models (a) proposed VGG16 model, (b) DenseNet201, (c) MobileNetV2, (d) ResNet50, (e) Xception, (f) EfficientNetB0.

In addition to testing the classification performance of the models, comparisons in terms of computational cost were also performed. Table 5 shows the computational cost of each model in terms of total training time, testing time, and time per epoch (in seconds). The training phase of VGG16 takes 952 s and that of MobileNetV2 takes 808 s, making them the fastest models; the training phase of the Xception model takes 2389 s, making it the slowest. It can be seen that the proposed model based on VGG16 is computationally less expensive than most of the other CNN architectures. Furthermore, the results show that the proposed model reaches the best accuracy in a reasonable time.
TABLE 5

Comparison of computation time.

Models | Training time (s) | Testing time (s) | Time per epoch (s)
VGG16 | 952.20 | 17.31 | 47–48
DenseNet201 | 1381.47 | 22.31 | 69–71
MobileNetV2 | 808.99 | 16.36 | 40–41
ResNet50 | 1010.64 | 17.80 | 50–51
Xception | 2389.72 | 26.97 | 119–120
EfficientNetB0 | 1169.24 | 16.64 | 58–59
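Since all models were trained for 20 epochs, the per-epoch times in Table 5 follow from dividing each total training time by the epoch count (assuming early stopping did not cut a run short):

```python
# Total training times (s) from Table 5 and the 20 training epochs used.
training_time = {"VGG16": 952.20, "DenseNet201": 1381.47, "MobileNetV2": 808.99,
                 "ResNet50": 1010.64, "Xception": 2389.72, "EfficientNetB0": 1169.24}
epochs = 20
per_epoch = {model: t / epochs for model, t in training_time.items()}
print(round(per_epoch["VGG16"], 2))  # 47.61, within the reported 47-48 s range
```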

Performance comparison with and without data augmentation

Data augmentation is a popular technique for mitigating overfitting and improving the generalization of models. Therefore, to evaluate the strength and validity of the proposed model, we conducted experiments with and without data augmentation. Table 6 reports the results for the proposed CNN model based on VGG16 in both settings. Our model achieved an accuracy of 0.978 with augmentation and 0.966 without augmentation; with augmentation it also records 0.978 sensitivity, 0.979 precision, 0.978 F1‐score, and 0.967 MCC. The results show that incorporating data augmentation into training performs significantly better and gives more consistent results.
TABLE 6

Performance comparison with and without data augmentation.

Technique | Accuracy | Precision | Sensitivity | F1‐score | MCC
With augmentation | 0.978 | 0.979 | 0.978 | 0.978 | 0.967
Without augmentation | 0.966 | 0.968 | 0.966 | 0.967 | 0.949

Result comparison with different optimization techniques

In this experimental setup, we chose various optimizers, namely Adam, SGD, RMSProp, and AdaDelta, to evaluate the effectiveness of the proposed method. Initially, the Adam optimizer was used in the training phase, but we chose three other optimizers to further evaluate the strength of the proposed model. Table 7 shows the detailed results of the proposed model using the different optimization algorithms. As shown in Table 7, the Adam optimizer achieved better classification performance than the other optimizers: using Adam, the classification accuracy on the chest X‐ray images was 0.978, while SGD, RMSProp, and AdaDelta yielded accuracies of 0.951, 0.955, and 0.952, respectively. From these results, it can be seen that the proposed model adapts well to all four optimizers and exhibits fast learning.
TABLE 7

Comparison of performance among different optimizers.

Optimizers | Accuracy | Precision | Sensitivity | F1‐score | MCC
Adam | 0.978 | 0.979 | 0.978 | 0.978 | 0.967
SGD | 0.951 | 0.953 | 0.952 | 0.952 | 0.927
RMSProp | 0.955 | 0.958 | 0.956 | 0.956 | 0.933
AdaDelta | 0.952 | 0.955 | 0.952 | 0.953 | 0.929
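An optimizer sweep like the one above can be wired up in Keras by recompiling the same model with each optimizer; the sketch below uses a tiny stand-in classifier on dummy data (in the paper, the full VGG16-based model is retrained with each optimizer):

```python
import numpy as np
import tensorflow as tf

def train_with(optimizer, X, y):
    """Train a tiny stand-in classifier with the given optimizer and
    return its training-set accuracy."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(X.shape[1],)),
        tf.keras.layers.Dense(3, activation="softmax"),
    ])
    model.compile(optimizer=optimizer,
                  loss="categorical_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=1, verbose=0)
    return model.evaluate(X, y, verbose=0)[1]  # [loss, accuracy] -> accuracy

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8)).astype("float32")
y = tf.keras.utils.to_categorical(rng.integers(0, 3, size=64), 3)

# The four optimizers compared in Table 7.
results = {name: train_with(tf.keras.optimizers.get(name), X, y)
           for name in ["Adam", "SGD", "RMSprop", "Adadelta"]}
print(results)
```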

Result comparison with different batch sizes

In this section, experiments with different batch sizes are performed to show the effect of batch size on the proposed model, since batch size is considered one of the most critical hyper‐parameters. During training, batch sizes of 16, 32, 64, and 128 were applied. Table 8 shows the performance of the proposed model in terms of accuracy, precision, sensitivity, F1‐score, and MCC for each batch size. Table 8 shows that the highest and most consistent test accuracy of 0.978 is achieved with batch size 64, along with 0.979 precision, 0.978 sensitivity, 0.978 F1‐score, and 0.967 MCC.
TABLE 8

Performance comparison of proposed model with different batch sizes.

Batch size | Accuracy | Precision | Sensitivity | F1‐score | MCC
16 | 0.954 | 0.958 | 0.955 | 0.956 | 0.932
32 | 0.970 | 0.971 | 0.970 | 0.970 | 0.955
64 | 0.978 | 0.979 | 0.978 | 0.978 | 0.967
128 | 0.961 | 0.964 | 0.961 | 0.962 | 0.943

Note: Bold indicates the best values.


Discussion on results

Currently, the use of machine learning, especially deep neural networks, has attracted great attention in COVID‐19 diagnosis, and several researchers have used different approaches to develop reliable diagnostic systems. However, most of them evaluate their methods on limited datasets. The concept of transfer learning has been widely used because it offers several beneficial features such as high accuracy, speed, and ease of application. In this study, we propose a model based on deep transfer learning to detect COVID‐19 from chest X‐ray images. A total of 3886 chest radiographs (1200 COVID‐19, 1341 normal, and 1345 viral pneumonia) obtained from a publicly available dataset were used to train the CNN models. Data augmentation was performed to increase the number of training samples in order to generalize the models and prevent overfitting. We comprehensively evaluated six effective CNN models, namely VGG16, ResNet50, DenseNet201, MobileNetV2, Xception, and EfficientNetB0, for COVID‐19 prediction. The architecture of each CNN model is enhanced with multiple dense layers and batch normalization layers to alleviate the vanishing gradient problem, reduce the need for dropout, produce faster training, achieve higher accuracy, and reduce overfitting. Extensive experiments were performed on a large number of images for automated COVID‐19 screening. The experimental results and overall comparison of the models are shown in Tables 3 and 4. The results demonstrate the superiority of our proposed CNN model based on VGG16, which achieves the highest overall performance in multi‐class classification of chest X‐ray images: a precision of 0.979, a sensitivity of 0.978, an F1‐score of 0.978, an accuracy of 0.978, and an MCC of 0.967. In addition, the proposed model achieved the highest sensitivity of 0.99 for the COVID‐19 class.
A higher sensitivity value minimizes missed cases of COVID‐19, which is the main goal of this study. The ResNet50 network was found to be the second‐best performing network in COVID‐19 prediction, achieving 0.975 precision, 0.974 sensitivity, 0.975 F1‐score, 0.974 accuracy, and 0.961 MCC. The Xception model shows the lowest performance on the dataset. As described in Table 5, the training time of our proposed model is 952 s, which indicates that our model trains faster than most of the other models. We can also note from Table 5 that the overall time taken by our proposed method to achieve an accuracy of 0.978 is much lower. This indicates that our proposed method is not computationally complex and offers a practical solution that can help reduce the spread of COVID‐19 infections. From the results, we can say that the proposed model is reliable and can be deployed in the medical field to assist radiologists in identifying COVID‐19 patients. The current study has limitations in terms of data availability, as the data used came from only one or two sources; in the future, we plan to use images from different sources to validate our proposed model.

Comparison with the state‐of‐the‐art methods

A comparative analysis was carried out to prove the effectiveness of the proposed CNN model for COVID‐19 classification. We compared the results of our proposed system with the recently proposed DL methods for the automatic diagnosis of COVID‐19 using chest X‐rays in Table 9. As shown in Table 9, there are some high‐quality studies that have developed DL models to detect COVID‐19. However, the main issue that should be noted is that most of them use a limited number of COVID‐19 case data. Keeping all these in mind, CNN models have been developed for COVID‐19 detection based on various pre‐trained architectures. In this work, a total of 3886 chest X‐rays obtained from a publicly available database were used to train and test the CNN models. Our proposed CNN model based on the pre‐trained VGG16 architecture achieves the highest accuracy rate of 0.978 in the multi‐class classification of COVID‐19, normal and viral pneumonia. It can be seen that the proposed CNN model provides better performance than other existing methods.
TABLE 9

Accuracy comparison between the proposed CNN model and other existing state‐of‐the‐art methods.

Author | Method | Dataset | Class | Accuracy
Wang et al. (2020) | COVID‐Net | 53 COVID‐19, 5526 non‐COVID‐19, 8066 normal | 3 | 0.933
Apostolopoulos & Mpesiana (2020) | VGG19 | 224 COVID‐19, 700 pneumonia, 504 healthy | 3 | 0.934
He (2020) | ResNet50 + SVM | 25 COVID‐19, 25 normal | 2 | 0.953
Loey et al. (2020) | AlexNet | 79 COVID‐19, 69 normal, 79 pneumonia | 3 | 0.851
Ozturk et al. (2020) | DarkNet | 127 COVID‐19, 500 normal, 500 pneumonia | 3 | 0.870
Keles et al. (2021) | COV19‐ResNet | 210 COVID‐19, 350 normal, 350 pneumonia | 3 | 0.976
Oh et al. (2020) | Patch‐based CNN | 8851 normal, 6012 pneumonia, 180 COVID‐19 | 3 | 0.889
Elzeki et al. (2021) | CXRVN | 221 COVID‐19, 148 pneumonia, 234 normal | 3 | 0.930
El Asnaoui & Chawki (2020) | VGG‐16 / Inception_Resnet_V2 | 2780 bacterial pneumonia, 231 COVID‐19, 1583 normal | 3 | 0.921 / 0.748
Khan et al. (2020) | CoroNet (Xception) | 310 normal, 327 viral pneumonia, 284 COVID‐19 | 3 | 0.896
Jain et al. (2020) | ResNet101 | 440 COVID‐19, 480 viral pneumonia | 2 | 0.977
Nigam et al. (2021) | VGG16 / Xception / EfficientNet | 795 COVID‐19, 795 normal, 711 others | 3 | 0.790 / 0.880 / 0.934
Alhudhaif et al. (2021) | DenseNet201 | 368 COVID‐19, 850 other pneumonia | 2 | 0.949
Gilanie et al. (2021) | CNN | 7021 pneumonia, 7021 normal, 1066 COVID‐19 | 3 | 0.966
Fayemiwo et al. (2021) | VGG16 | 1300 pneumonia, 1300 COVID‐19, 1300 normal | 3 | 0.938
Gaur et al. (2021) | VGG16 / EfficientNetB0 | 1345 viral pneumonia, 420 COVID‐19, 1341 normal | 3 | 0.878
Proposed method | VGG16 / ResNet50 | 1200 COVID‐19, 1341 normal, 1345 viral pneumonia | 3 | 0.978 / 0.974

CONCLUSION AND FUTURE WORK

The COVID‐19 pandemic clearly poses a threat to human lives. Early diagnosis of COVID‐19 infection is essential to prevent the disease from spreading to others. Several researchers from all over the world are working hard to deal with the COVID‐19 pandemic. During this emergency, it is important that no positive case goes unrecognized. Deep learning techniques have proven to be an indispensable tool for more accurate analysis of big data. This study demonstrates the usefulness of modern deep learning models using a transfer learning approach to automatically detect COVID‐19 from chest X‐ray images. This study utilizes open‐source chest X‐ray images of normal, viral pneumonia, and COVID‐19 cases. The curated chest X‐ray dataset consists of 1200 COVID‐19 images, 1341 normal images, and 1345 viral pneumonia images. In this work, six popular and well‐performing CNN‐based deep learning models were tested to distinguish COVID‐19, viral pneumonia, and normal patients from chest X‐rays. Through extensive experiments on the collected dataset, the results showed that the proposed CNN model based on VGG16 outperformed all other models, achieving 97.84% accuracy, 97.89% sensitivity, 97.90% precision, 97.89% F1‐score, and 96.75% MCC. Chest radiographs of COVID‐19 and viral pneumonia contain similar features that are difficult for radiologists to distinguish, so framing COVID‐19 detection as a dedicated classification problem provides better control over model development. According to our research results, it is believed that, due to its higher performance, this study has the potential to reduce the pressure on radiologists caused by the increasing number of COVID‐19 patients requiring decisions in clinical practice. In the future, we will use other deep learning techniques, such as GANs, and a larger number of X‐ray images.

CONFLICT OF INTEREST

The authors declare no conflict of interest.
REFERENCES (10 of 34 shown)

LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature.
Das, N. N., Kumar, N., Kaur, M., Kumar, V., & Singh, D. (2020). Automated deep transfer learning-based approach for detection of COVID-19 infection in chest X-rays. IRBM.
Oh, Y., Park, S., & Ye, J. C. (2020). Deep learning COVID-19 features on CXR using limited training data sets. IEEE Transactions on Medical Imaging.
Narin, A., Kaya, C., & Pamuk, Z. (2021). Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks. Pattern Analysis and Applications.
Singh, M., Bansal, S., Ahuja, S., Dubey, R. K., Panigrahi, B. K., & Dey, N. (2021). Transfer learning-based ensemble support vector machine model for automated COVID-19 detection using lung computerized tomography scan data. Medical & Biological Engineering & Computing.
Elzeki, O. M., Shams, M., Sarhan, S., Abd Elfattah, M., & Hassanien, A. E. (2021). COVID-19: A new deep learning computer-aided model for classification. PeerJ Computer Science.
Nigam, B., Nigam, A., Jain, R., Dodia, S., Arora, N., & Annappa, B. (2021). COVID-19: Automatic detection from X-ray images by utilizing deep learning methods. Expert Systems with Applications.
Khan, A. I., Shah, J. L., & Bhat, M. M. (2020). CoroNet: A deep neural network for detection and diagnosis of COVID-19 from chest X-ray images. Computer Methods and Programs in Biomedicine.
Apostolopoulos, I. D., & Mpesiana, T. A. (2020). Covid-19: Automatic detection from X-ray images utilizing transfer learning with convolutional neural networks. Physical and Engineering Sciences in Medicine.
Jain, G., Mittal, D., Thakur, D., & Mittal, M. K. (2020). A deep learning approach to detect Covid-19 coronavirus with X-ray images. Biocybernetics and Biomedical Engineering.
