Literature DB >> 33961635

ai-corona: Radiologist-assistant deep learning framework for COVID-19 diagnosis in chest CT scans.

Mehdi Yousefzadeh1,2,3, Parsa Esfahanian1, Seyed Mohammad Sadegh Movahed3, Saeid Gorgin1,4, Dara Rahmati1,5, Atefeh Abedini6, Seyed Alireza Nadji7, Sara Haseli6, Mehrdad Bakhshayesh Karam6, Arda Kiani6, Meisam Hoseinyazdi8, Jafar Roshandel6, Reza Lashgari2.   

Abstract

The development of medical assisting tools based on advances in artificial intelligence is essential in the global fight against the COVID-19 outbreak and for the future of medical systems. In this study, we introduce ai-corona, a radiologist-assistant deep learning framework for COVID-19 infection diagnosis using chest CT scans. Our framework incorporates an EfficientNetB3-based feature extractor. We employed three datasets: the CC-CCII set, the Masih Daneshvari Hospital (MDH) cohort, and the MosMedData cohort. Overall, these datasets constitute 7184 scans from 5693 subjects and include the COVID-19, non-COVID abnormal (NCA), common pneumonia (CP), non-pneumonia, and Normal classes. We evaluated ai-corona on test sets from the CC-CCII set and the MDH cohort, and on the entirety of the MosMedData cohort, for which it gained AUC scores of 0.997, 0.989, and 0.954, respectively. Our results indicate that ai-corona outperforms all the alternative models. Lastly, our framework's diagnosis capabilities were evaluated as an assistant to several experts; we observed an increase in both the speed and the accuracy of expert diagnosis when incorporating ai-corona's assistance.

Year:  2021        PMID: 33961635      PMCID: PMC8104381          DOI: 10.1371/journal.pone.0250952

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

Since the beginning of 2020, the novel Coronavirus Disease 2019 (COVID-19) has spread widely across the globe. As of April 29, 2021, there were more than 118 million reported cases and 2.5 million deaths [1]. Patients infected with COVID-19 commonly display symptoms such as fever, cough, fatigue, breathing difficulties, and muscle ache [2-4]. Vaccination started in many countries in early 2021, but has been facing many challenges [5].

Currently, the most common method of testing for COVID-19 is Real-Time Polymerase Chain Reaction (RT-PCR), which detects viral nucleotides in upper respiratory specimens obtained by nasopharyngeal, oropharyngeal, or nasal mid-turbinate swab [6]. RT-PCR has been shown to have several drawbacks. Reports suggest that, since oropharyngeal swabs detect COVID-19 less frequently than nasopharyngeal swabs, RT-PCR tends to have a high false-negative rate. Furthermore, RT-PCR has demonstrated a decrease in sensitivity to below 70% due to low viral nucleic acid load and inefficiencies in its detection, which may be caused by immature nucleic acid detection technology, variation in the detection rate across different gene region targets, or a low patient viral load [7]. In addition, the availability of test kits and of trained personnel to administer them remains suboptimal in some countries, and the long turnaround time of the test further rules out RT-PCR as a reliable early detection and screening method [8-10].

In contrast to RT-PCR, diagnosis from other measurements, such as chest Computed Tomography (CT) and blood factors, has been shown to be an effective early detection and screening method with high sensitivity, both in detection [11] and in anticipating the severity of the disease [12]. The chest CT scan of a COVID-19 infected patient reveals bilateral peripheral involvement in multiple lobes, with areas of consolidation and ground-glass opacity that progress to “crazy-paving” patterns as the disease develops [11]. Asymmetric bilateral subpleural patchy ground-glass opacities and consolidation with a peripheral or posterior distribution, mainly in the middle and lower lobes, are described as the most common imaging findings of COVID-19 [13]. Additional common findings include interlobular septal thickening, air bronchogram, and a crazy-paving pattern in the intermediate stages of the disease [11]. The most common patterns in the advanced stage are subpleural parenchymal bands, fibrous stripes, and subpleural resolution. Nodules, cystic change, pleural effusion, pericardial effusion, lymphadenopathy, cavitation, CT halo sign, and pneumothorax are some of the uncommon but possible findings [11, 14]. Recent studies indicate that organizing pneumonia, which occurs in the course of viral infection, is pathologically responsible for the clinical and radiological manifestations of Coronavirus pneumonia [13].

Deep learning is an area of Artificial Intelligence (AI) that has demonstrated tremendous capabilities in image feature extraction and has been recognized as a successful tool in medical imaging-based diagnosis, performing exceptionally well with modalities such as X-Ray, Magnetic Resonance Imaging (MRI), and CT [15-21]. Recently, research on AI-assisted respiratory diagnosis, especially of pneumonia, has gained a lot of attention. One of the well-established standards in this research is the comparison of AI with expert medical and radiology professionals.
As a pioneering work in this field, [22] introduced a radiologist-level deep learning framework, trained and validated on the ChestX-ray8 dataset [23], for the detection of 14 abnormalities, including pneumonia, in chest X-Ray images; it was later developed into a deep learning framework with pneumonia detection capabilities equivalent to those of expert radiologists [24]. Moreover, [25] introduced a novel dataset of chest X-Ray images annotated with 14 abnormalities (7 shared with ChestX-ray8) along with a state-of-the-art deep learning framework. Lastly, [26] proposed a deep learning framework with an AlexNet-based [27] feature extractor capable of accurately diagnosing knee injuries from MRI scans, and further showcased the positive impact of AI assistance on expert diagnosis.

In COVID-19 related research, [8] reported a sensitivity of 0.59 for the RT-PCR test kit and 0.88 for CT-based diagnosis in patients with COVID-19 infection, and a radiologist sensitivity of 0.97 in diagnosing COVID-19 infected patients with RT-PCR confirmation. Furthermore, [28] introduced a deep learning framework with a 0.96 AUC score in the diagnosis of RT-PCR confirmed COVID-19 infected patients. Zhang et al. [29] proposed a model that, on a dataset of 4154 subjects, achieved an AUC score of 0.98 for diagnosing COVID-19 against two other classes, Normal and CP (common pneumonia, i.e. non COVID-19 viral and bacterial pneumonia). They further made their dataset, CC-CCII [29], publicly available. In addition, the model proposed by Jin et al. [30], developed on a dataset of 9025 subjects amalgamated from their own data and several public datasets (e.g. LIDC-IDRI [31], Tianchi-Alibaba [32], MosMedData [33], and CC-CCII), gained an accuracy of 0.975 for diagnosing between COVID-19 and three other classes (non-pneumonia, non-viral community-acquired pneumonia, and Influenza-A/B), 0.921 for diagnosing between COVID-19 and the CP and Normal classes on the CC-CCII dataset, and 0.933 for diagnosing COVID-19 from non-pneumonia on the MosMedData cohort. Notably, this work also manages to diagnose between COVID-19 and influenza type-A, which is surprising given the small amount of influenza data in their study.

In this paper, we present ai-corona, a radiologist-level deep learning framework for COVID-19 diagnosis in chest CT scans. Our framework was developed on a set of 7184 lung CT scans from 5693 subjects, of which 2032 subjects are from the Masih Daneshvari Hospital (MDH) cohort and the rest belong to the CC-CCII set and the MosMedData cohort. This data was gathered from three countries: China, Iran, and Russia. Our framework diagnoses between the COVID-19, CP (common pneumonia), NCA (non COVID-19 abnormal), non-pneumonia, and Normal classes. We evaluate and compare the performance of ai-corona with experts and RT-PCR in COVID-19 diagnosis, and further compare our framework with the AI models proposed by Zhang et al. [29] and Jin et al. [30]. Finally, we examine the impact of AI as assistance to expert diagnosis. In short, the main advantages and novelties of this study are as follows:

- Introducing a comprehensive and authentic methodology for annotating the dataset cases, especially the COVID-19 infections, in the MDH dataset.
- Proposing a deep learning framework that accurately diagnoses chest CT scans for COVID-19 while being robust to the number of slices in a scan and having a low computational load.
- Thoroughly evaluating the diagnosis performance of ai-corona on multiple datasets and comparing it to radiologists, RT-PCR, and two other similar works.
- Evaluating and elucidating the impact of ai-corona's assistance on radiologists' diagnosis performance.

Materials and methods

Data

Three datasets were employed in this work: the MDH cohort, the CC-CCII set, and the MosMedData cohort. An overall summary of all the data employed in our work can be found in Table 1.
Table 1

Number of subjects and (number of scans) for each class in the CC-CCII set, MDH cohort, and MosMedData cohort separated over training, tuning, and test.

Dataset    | Class         | Training    | Tuning    | Test        | Total
MDH        | Normal        | 467 (470)   | 51 (51)   | 120 (121)   | 628 (642)
MDH        | NCA           | 576 (578)   | 64 (64)   | 117 (117)   | 757 (759)
MDH        | COVID-19      | 475 (542)   | 53 (59)   | 109 (119)   | 637 (720)
CC-CCII    | Normal        | 670 (844)   | 72 (94)   | 81 (105)    | 823 (1043)
CC-CCII    | CP            | 665 (1131)  | 67 (127)  | 87 (147)    | 828 (1405)
CC-CCII    | COVID-19      | 734 (1231)  | 82 (131)  | 84 (143)    | 900 (1505)
MosMedData | Non-Pneumonia | -           | -         | 254 (254)   | 254 (254)
MosMedData | COVID-19      | -           | -         | 856 (856)   | 856 (856)
Total      |               | 3587 (4796) | 398 (526) | 1708 (1862) | 5693 (7184)
The first dataset was obtained by our group from patients hospitalized at the Masih Daneshvari Hospital (MDH) (Tehran, Iran). The cascade structure of this cohort can be found in S1 Fig. This cohort consists of 2121 lung CT scans from 2032 subjects annotated into 3 classes: (1) Normal; (2) Non-COVID Abnormal (NCA); and (3) COVID-19. Since differentiating between the COVID-19 and Normal classes is easier than between COVID-19 and NCA (especially when the imaging features are similar), having the NCA class is very important; it includes abnormalities such as atelectasis, cardiomegaly, emphysematous lung changes, hydropneumothorax, pneumothorax, cardiopulmonary edema, cavity, fibrocavitary changes, fibrobronchiectatic changes, mass, and nodule. Using the search function of the hospital's PACS and by reviewing reports by two board-certified radiologists, we gathered a preliminary dataset with a balanced distribution over all three classes. All the participants in the MDH cohort gave written consent, and our work received the ethical license IR.SBMU.NRITLD.REC.1399.024 from the Iranian National Committee for Ethics in Biomedical Research.

Cases in the Normal and NCA classes are from before the start of the Coronavirus global pandemic. A subset of the data in these two classes was randomly selected for testing. This portion was re-annotated by a different expert radiologist, and only the cases with consistent labels (i.e. the same label as in the initial report) were retained in the test set. The MDH Normal and NCA cases not included in the test subset were further divided randomly into a training subset and a tuning subset. The MDH COVID-19 scans for testing were taken in the early stages of the infection and included 119 lung CT scans from 109 patients hospitalized for more than three days. These scans were selected by the consensus of several metrics that indicate COVID-19 infection: (1) a report by at least one radiologist on the scan; (2) confirmation of infection by two pulmonologists; (3) clinical presentation; and (4) the RT-PCR report. Furthermore, unlike other works that take a positive RT-PCR as the sole criterion for the COVID-19 label, and since our evaluation includes comparing the diagnosis performance of ai-corona with experts and RT-PCR, we clearly could not use a dataset annotated solely on the basis of RT-PCR test results. Our annotation strategy is, therefore, more comprehensive and incorporates additional available metadata. The MDH COVID-19 training (1518 subjects, 1590 scans) and tuning (168 subjects, 174 scans) sets were annotated using the aforementioned reports by the two radiologists. The CT scans in the MDH cohort contain between 21 and 46 slices acquired in axial orientation with a slice thickness between 8 and 10 mm. The histogram of the number of slices is shown in S2(a) Fig, while S2(b) and S2(c) Fig illustrate the age and sex distributions of the MDH cohort. Moreover, as the NCA class of the MDH cohort includes many samples with non COVID-19 pneumonia, we can treat this class as the equivalent of the CC-CCII set's CP class for our model's training.

The second dataset employed in this work was the publicly available CC-CCII dataset [29]. After quality control (e.g. removing non-standard scans such as those with a small number of slices), this set contains 3953 CT scans from 2551 subjects. The scans in CC-CCII are annotated into three classes: Normal, Common Pneumonia (CP), and COVID-19.
This CC-CCII dataset was randomly split into three subsets: (1) training (2069 subjects, 3206 scans), (2) tuning (230 subjects, 352 scans), and (3) testing (252 subjects, 395 scans). The tuning subset was used for model checkpointing and selection of the best overall model. The third dataset, the MosMedData cohort, is also publicly available and comprises 1110 CT scans from 1110 subjects, annotated into two classes: non-pneumonia and COVID-19. We used the entire MosMedData cohort for external testing, that is, testing on a dataset that was not used for model training or tuning. To evaluate our model on this cohort, we take the prediction of the COVID-19 class (for binary classification). The public datasets LIDC-IDRI [31] and Tianchi-Alibaba [32] (which were used for the training of the model proposed by Jin et al. [30]) were not used in our framework's development, as these sets target benign and malignant tumor diagnosis and might introduce uncertainties into our framework.

For the RT-PCR evaluation set, 2672 subjects, each hospitalized for more than three days, were tested 6419 times between February and October 2020. Respiratory samples, including pharyngeal swabs/washings, were obtained from the subjects. Nucleic acid was extracted from the samples using a QIAsymphony system (QIAGEN, Hilden, Germany), and SARS-CoV-2 RNA was detected using primer and probe sequences for screening and confirmation on the basis of the sequence described by [34]. An RT-PCR diagnosis is considered correct when a patient has at least one positive test result.
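To make the "at least one positive test" rule concrete, here is a minimal pandas sketch of the per-patient aggregation; the column names are hypothetical illustrations, not taken from the authors' code:

```python
import pandas as pd

# One row per RT-PCR test; a patient may be tested several times.
tests = pd.DataFrame({
    "patient_id":      [101, 101, 102, 103, 103],
    "result_positive": [False, True, False, False, False],
})

# A patient counts as RT-PCR positive if any of their repeated tests was positive.
per_patient_positive = tests.groupby("patient_id")["result_positive"].any()
print(per_patient_positive)  # 101: True, 102: False, 103: False
```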

Pre-processing

For all the image slices, the top 0.5% of pixels with the highest values were selected and clipped to the lowest value in that range. The intensities were then linearly transformed to the range [0, 255]. Since we utilize models pre-trained on the ImageNet dataset [35], an additional ImageNet normalization was also carried out. We also opted not to perform any segmentation (i.e. patch extraction) in our pre-processing. Manually annotating each dataset (as Jin et al. [30] did) is time and resource consuming, while automated methods, such as image processing techniques and pre-trained segmentation deep learning models, would introduce further unwanted error and uncertainty into our data, and subsequently into the model's inference.
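A minimal sketch of this intensity pipeline, assuming raw slices as NumPy arrays and the standard ImageNet statistics; the function name is illustrative, not from the authors' code:

```python
import numpy as np

# Standard ImageNet statistics (the usual Keras/torchvision values).
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def preprocess_slice(slice_raw: np.ndarray) -> np.ndarray:
    """Clip the brightest 0.5% of pixels, rescale to [0, 255],
    then apply ImageNet normalization on a 3-channel copy."""
    # Clip the top 0.5% of pixel values to the 99.5th percentile.
    upper = np.percentile(slice_raw, 99.5)
    clipped = np.minimum(slice_raw, upper)

    # Linearly rescale intensities to [0, 255].
    lo, hi = clipped.min(), clipped.max()
    scaled = (clipped - lo) / max(hi - lo, 1e-8) * 255.0

    # Replicate the grayscale slice into 3 identical channels,
    # then normalize with ImageNet mean/std (values scaled to [0, 1] first).
    rgb = np.stack([scaled, scaled, scaled], axis=-1) / 255.0
    return (rgb - IMAGENET_MEAN) / IMAGENET_STD
```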

Deep learning method

Inspired by [26], ai-corona's deep learning model consists of two main blocks, a feature extractor and a classifier, as shown in Fig 1. The main challenge is mapping a 3-dimensional CT scan, which is a series of image slices, to a probability vector with a length equal to the number of classes. Another challenge is that not all scans have the same number of slices and not all slices are useful for diagnosis. To address this, we take the middle 50% of the image slices in each scan and denote the number of selected slices with S. We also experimented with other slice selection strategies (e.g. portions larger than 50%, top/bottom 50%, etc.), none of which performed better.
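A minimal sketch of this middle-50% selection rule, assuming a (num_slices, H, W) NumPy volume; the helper name is hypothetical:

```python
import numpy as np

def select_middle_slices(volume: np.ndarray, fraction: float = 0.5) -> np.ndarray:
    """Keep the central `fraction` of slices from a (num_slices, H, W) volume."""
    n = volume.shape[0]
    keep = max(1, int(round(n * fraction)))
    start = (n - keep) // 2
    return volume[start:start + keep]

# Example: a 40-slice scan yields S = 20 central slices.
scan = np.zeros((40, 512, 512))
assert select_middle_slices(scan).shape == (20, 512, 512)
```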
Fig 1

The schematic structure of ai-corona’s deep learning model.

The total number of utilized slices is denoted by S. Each selected slice is fed through the feature extractor block pipeline one by one, so that we end up with S vectors, which are then transformed into a single vector via an average pooling function. Afterwards, the result is passed through a fully connected network to reach the three output neurons corresponding to our three classes.

As shown in Fig 1, the feature extractor block is a pipeline that receives each slice with dimensions 512 × 512 × 3 (3 is the number of color channels; all three channels are identical, as CT slices are grayscale) and outputs a vector of length 1536 through an average pooling function. After all S slices have passed through the feature extractor block, another average pooling is applied to the resulting S vectors, yielding a single vector of length 1536. This pipeline manner ensures that our framework is independent of the number of slices in a CT scan: the pipeline receives a varying number of slices, extracts their features, and always outputs a single vector of known length. Moreover, the use of only a single feature extractor significantly reduces the computational load of our framework, resulting in much faster training and prediction times. Convolutional Neural Networks (CNN) were used for the feature extraction block. We experimented with different CNN models, such as DenseNet, ResNet, Xception, and EfficientNetB0 through EfficientNetB5 [36-39], taking into account their accuracy and accuracy density on the ImageNet dataset [40]. All of these models were initialized with their respective pre-trained ImageNet weights. In the end, the EfficientNetB3 model, stripped of its last dense layers, was chosen as the primary feature extractor for our deep learning framework. The vector output of the EfficientNetB3 feature extraction block is then passed through the classifier block, which contains another average pooling layer connected to the model's output neurons (one per class) via a dense network of connections. ai-corona is implemented with Python 3.7 [41] and the Keras 2.3 [42] framework and was trained on an NVIDIA GeForce RTX 2080 Ti for 60 epochs in a total of three hours. The Pydicom [43] package was used to read the DICOM files of the cases.
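The following is a hedged sketch of the described architecture. The paper used Keras 2.3; for self-containedness this sketch uses tf.keras and its bundled EfficientNetB3, so it is an approximation of the design, not the authors' released code:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# EfficientNetB3 without its classification head; its final feature map has
# 1536 channels, matching the 1536-length vectors described in the text.
backbone = tf.keras.applications.EfficientNetB3(
    include_top=False, weights="imagenet", input_shape=(512, 512, 3))

# Input: a scan as a variable-length sequence of S slices of 512 x 512 x 3.
scan = layers.Input(shape=(None, 512, 512, 3))

# Shared feature extractor applied slice by slice -> one 1536-vector per slice.
per_slice = layers.TimeDistributed(backbone)(scan)
per_slice = layers.TimeDistributed(layers.GlobalAveragePooling2D())(per_slice)

# Average the S slice vectors into a single 1536-vector, then classify.
pooled = layers.GlobalAveragePooling1D()(per_slice)          # shape (1536,)
outputs = layers.Dense(3, activation="softmax")(pooled)      # three classes

model = Model(scan, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```

Because the slice axis is declared as None and the pooling steps collapse it, the same model accepts scans with any number of slices, which is the robustness property described above.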

Class activation maps

To generate the class activation map of an image slice, we computed a weighted average across the 1536 channels of the feature map, using weights from the classification block, to obtain a 10 × 10 image. The resulting map was then mapped to a color scheme, upsampled to 512 × 512 pixels, and overlaid on the original input image slice. Because the feature maps are weighted by the classification block's parameters, more predictive features appear brighter, so the regions of the image slice that most influence the model's prediction appear brightest. The class activation maps thus highlight which pixels in an image slice are important for the model's prediction [44].
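A minimal sketch of this computation, assuming the backbone's final 10 × 10 × 1536 activation for one slice and the classifier weights for the class of interest; the function name is illustrative, not from the authors' code:

```python
import numpy as np
from scipy.ndimage import zoom

def class_activation_map(feature_map: np.ndarray,
                         class_weights: np.ndarray) -> np.ndarray:
    """feature_map: (10, 10, 1536); class_weights: (1536,)."""
    # Weighted average over the 1536 channels -> a 10 x 10 relevance map.
    cam = np.tensordot(feature_map, class_weights, axes=([-1], [0]))
    # Normalize to [0, 1] and upsample to the 512 x 512 slice resolution.
    cam = (cam - cam.min()) / max(cam.max() - cam.min(), 1e-8)
    return zoom(cam, 512 / cam.shape[0], order=1)

# The returned 512 x 512 map can then be color-mapped and overlaid on the slice.
```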

Statistical inference

To quantify the reliability of our findings and the performance of the model's detection of COVID-19 in chest CT scans, we provide a thorough comparison with the diagnoses of expert practicing radiologists. We compute the following evaluation criteria: sensitivity (true positive rate), specificity (true negative rate), F1-score, Cohen's kappa, and AUC. Moreover, the confusion matrix over all the classes of each individual study is also calculated. The presence of the class under consideration is assigned a positive label, and the rest of the classes are assigned a negative label. Incorporating error propagation and using Bayesian statistics, we calculate the marginalized confidence region at the 95% level for each computed quantity. The significance of the diagnostic results is examined by systematically computing p-value statistics; to reach a conservative decision, the 3σ significance level is usually considered. Since the radiologists' diagnoses are given as “Yes” or “No” statements for each class, it is necessary to convert the probability values computed by our model to binary values. Hence, we selected an operating point for distinguishing a given class from the others and computed the true positive rate (sensitivity) versus the false positive rate (1-specificity). This operating point was selected such that the model would yield a high specificity. In addition to the aforementioned evaluation criteria, the Receiver Operating Characteristic (ROC) diagram is also estimated for our studies. All of our criteria were calculated using the scikit-learn [45] package.
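A sketch of this per-class, one-vs-rest evaluation using scikit-learn. The "highest sensitivity subject to a minimum specificity" rule for picking the operating point is our assumption; the paper only states that the point was chosen to yield a high specificity:

```python
import numpy as np
from sklearn.metrics import (roc_auc_score, roc_curve, f1_score,
                             cohen_kappa_score, confusion_matrix)

def evaluate_class(y_true, y_prob, min_specificity=0.95):
    """y_true: binary labels for the class under study (1 = positive);
    y_prob: the model's predicted probability for that class."""
    fpr, tpr, thresholds = roc_curve(y_true, y_prob)
    auc = roc_auc_score(y_true, y_prob)

    # Operating point: highest sensitivity among thresholds that keep
    # specificity >= min_specificity (an assumed selection rule).
    ok = (1 - fpr) >= min_specificity
    threshold = thresholds[ok][np.argmax(tpr[ok])]
    y_pred = (y_prob >= threshold).astype(int)

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {"auc": auc,
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "f1": f1_score(y_true, y_pred),
            "kappa": cohen_kappa_score(y_true, y_pred)}
```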

Experts evaluation

Our team of experts annotated the cases in the CC-CCII and MDH test sets with “Yes” and “No” labels for each class. To prevent a loss in diagnosis performance due to fatigue, the experts were asked to work in short time blocks. Their performance was then evaluated and recorded. Next, to evaluate the impact of AI assistance on the experts' performance, after an appropriate amount of time and after shuffling the sets (to prevent recall of individual cases), the experts re-annotated the two sets a second time, this time with access to the output of the model. They incorporated the model's opinion for suspicious cases at their own discretion. Their performance was evaluated and recorded again. Our team of four experts includes two practicing academic senior radiologists with 15 years of experience each, referred to in our study as Senior Radiologist 1 and Senior Radiologist 2. Another expert is a practicing academic radiologist with 5 years of experience, referred to as the Junior Radiologist. The last member is a senior radiology resident, referred to as the Radiology Resident. The team was chosen such that a wide range of experience and background knowledge would be represented, in order to make our study more comprehensive.

Results

Training, evaluation, and testing datasets

To develop ai-corona, we utilized data from three different sources: (1) the MDH cohort, (2) the publicly available CC-CCII dataset [29], and (3) the publicly available MosMedData cohort. The combined data come from multiple international sites and comprise 7184 CT scans from 5693 subjects categorized into five classes: Normal, CP, NCA, non-pneumonia, and COVID-19. For a better comparison of the diagnosis performance between RT-PCR and CT scans, the RT-PCR test records of 2672 patients over a 7-month period were gathered. The MDH and CC-CCII data were used for training, evaluation (tuning), and testing; the MosMedData was used entirely for testing. Overall, 5322 scans from 3985 subjects were used for training and tuning, and three sets were used for testing: (1) the CC-CCII test set (105 Normal, 147 CP, and 143 COVID-19 scans), (2) the MDH test set (121 Normal, 117 NCA, and 119 COVID-19 scans), and (3) the entire MosMedData cohort (254 non-pneumonia and 856 COVID-19 scans). Taking into consideration the ground truth annotation of all the works involved, the CC-CCII test set was used to compare ai-corona with the models proposed by Zhang et al. [29] and Jin et al. [30] and with expert radiologists. Furthermore, the MDH test set was used to compare ai-corona with the radiologists and RT-PCR. Lastly, the MosMedData cohort was used to compare ai-corona with the model proposed by Jin et al. [30].

RT-PCR sensitivity

Since the truth annotation methodology described in the Data subsection yields accurate labels, it was used to annotate a separate set for RT-PCR evaluation. This set is used to showcase the evolution of RT-PCR's sensitivity over a period of 7 months in Fig 2 (the sensitivity for each day is calculated as the average sensitivity of a 15-day period centered on that day). RT-PCR's sensitivity oscillates in the range [0.351, 0.722]. The decrease in sensitivity to 0.351 on April 29, 2020 is due to a change of the specimen collection method to oropharyngeal wash [46]; this was later reverted, and nasopharyngeal and oropharyngeal swabs were used. The highest value of RT-PCR's sensitivity in this evaluation is taken as its best, denoted RT-PCR Best.
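A minimal pandas sketch of this 15-day centered moving-average sensitivity; the column names are hypothetical, not from the authors' code:

```python
import pandas as pd

# One row per RT-PCR test of a confirmed COVID-19 patient: test date and
# whether the test came back positive (a positive test = a true positive here).
df = pd.DataFrame({
    "date": pd.to_datetime(["2020-04-20", "2020-04-21", "2020-04-29"]),
    "positive": [True, False, False],
})

# Daily sensitivity = fraction of positive tests that day, then a 15-day
# centered moving average as described in the text.
daily = df.set_index("date")["positive"].resample("D").mean()
rolling_sensitivity = daily.rolling(window=15, center=True, min_periods=1).mean()
```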
Fig 2

Fluctuations in RT-PCR sensitivity on a daily basis.

The highest peak reaches 0.722, which is denoted as RT-PCR Best.


Performance evaluation and comparison

With three test sets, we evaluated our framework's COVID-19 diagnosis performance on the CC-CCII test set, the MDH test set, and the MosMedData cohort for all the studies (an operating point was selected for each study). The confusion matrices for our evaluation results can be found in Fig 3. Moreover, for the COVID-19 class, ROC curves are shown in Fig 4, and a more thorough look using the four metrics is depicted in Fig 5a and 5b. Lastly, the complete numerical reports for this evaluation can be found in Table 2. Values denoted with “-” in the table indicate a missing report.
Fig 3

Top row, left to right: confusion matrices for ai-corona, the model proposed by Zhang et al. [29], and the model proposed by Jin et al. [30] on the CC-CCII test set, respectively.

Bottom row, left and middle: confusion matrices for ai-corona on the MDH test set and the MosMedData cohort, respectively. Bottom row, right: confusion matrix for the model proposed by Jin et al. [30] on the MosMedData cohort.

Fig 4

ROC curve diagrams for ai-corona on the (a) CC-CCII test set (b) MDH test set (c) MosMedData cohort.

Diagrams in the bottom row correspond to a zoom-in of their respective curves. Hollow shapes represent an expert unaided by AI, while filled shapes represent an expert with AI assistance. As specificity is not available for RT-PCR, its sensitivity is shown as a solid line in (b).

Fig 5

Detailed comparison of all the studies using our evaluation metrics for above: CC-CCII test set, below: MDH test set.

Hollow shapes represent an expert unaided by AI, while filled shapes represent an expert with AI assistance.

Table 2

Evaluation results of all the studies with a 95% confidence interval using the metrics sensitivity, specificity, F1-score, Kappa, and AUC.

A “-” value indicates a lack of data. Reports in sections A, B, and C correspond to the CC-CCII test set, the MDH test set, and the MosMedData cohort, respectively.

Study             | Sensitivity (95% CI) | Specificity (95% CI) | F1-score (95% CI)    | Kappa (95% CI)       | AUC (95% CI)

A (CC-CCII test set)
ai-corona         | 0.972 (0.956, 0.988) | 0.968 (0.954, 0.982) | 0.970 (0.954, 0.986) | 0.935 (0.909, 0.961) | 0.997 (0.993, 0.999)
Zhang et al. [29] | 0.949                | 0.911                | -                    | -                    | 0.980 (0.967, 0.990)
Jin et al. [30]   | 0.921 (0.918, 0.926) | 0.780 (0.770, 0.789) | -                    | -                    | 0.921 (0.918, 0.926)
Senior 1          | 0.895 (0.874, 0.916) | 0.956 (0.945, 0.967) | 0.908 (0.892, 0.924) | 0.857 (0.837, 0.877) | -
Senior 1 + AI     | 0.937 (0.919, 0.955) | 0.948 (0.937, 0.959) | 0.924 (0.910, 0.938) | 0.880 (0.860, 0.900) | -
Senior 2          | 0.909 (0.888, 0.930) | 0.917 (0.903, 0.931) | 0.884 (0.868, 0.900) | 0.816 (0.792, 0.840) | -
Senior 2 + AI     | 0.965 (0.954, 0.976) | 0.940 (0.927, 0.953) | 0.932 (0.920, 0.944) | 0.892 (0.876, 0.908) | -
Junior            | 0.636 (0.605, 0.667) | 0.897 (0.882, 0.912) | 0.700 (0.677, 0.723) | 0.555 (0.523, 0.587) | -
Junior + AI       | 0.776 (0.750, 0.802) | 0.913 (0.898, 0.928) | 0.804 (0.782, 0.826) | 0.700 (0.672, 0.728) | -
R. Resident       | 0.839 (0.813, 0.865) | 0.663 (0.639, 0.687) | 0.690 (0.670, 0.710) | 0.459 (0.426, 0.492) | -
R. Res. + AI      | 0.853 (0.826, 0.880) | 0.778 (0.758, 0.798) | 0.760 (0.740, 0.780) | 0.599 (0.568, 0.630) | -

B (MDH test set)
ai-corona         | 0.924 (0.895, 0.953) | 0.983 (0.961, 1.000) | 0.953 (0.935, 0.971) | 0.917 (0.887, 0.947) | 0.989 (0.984, 0.994)
RT-PCR            | 0.722 (0.661, 0.783) | -                    | -                    | -                    | -
Senior 1          | 0.857 (0.833, 0.881) | 0.979 (0.963, 0.995) | 0.903 (0.886, 0.920) | 0.858 (0.838, 0.878) | -
Senior 1 + AI     | 0.908 (0.887, 0.929) | 0.987 (0.975, 0.999) | 0.939 (0.927, 0.951) | 0.910 (0.892, 0.928) | -
Senior 2          | 0.899 (0.874, 0.924) | 0.979 (0.965, 0.993) | 0.926 (0.912, 0.940) | 0.891 (0.868, 0.914) | -
Senior 2 + AI     | 0.899 (0.877, 0.921) | 0.992 (0.983, 1.000) | 0.939 (0.928, 0.950) | 0.910 (0.894, 0.926) | -
Junior            | 0.765 (0.738, 0.792) | 0.992 (0.982, 1.000) | 0.858 (0.838, 0.878) | 0.800 (0.775, 0.825) | -
Junior + AI       | 0.857 (0.833, 0.881) | 1.000 (1.000, 1.000) | 0.923 (0.908, 0.938) | 0.889 (0.869, 0.909) | -
R. Resident       | 0.882 (0.858, 0.906) | 0.920 (0.898, 0.942) | 0.864 (0.846, 0.882) | 0.794 (0.766, 0.822) | -
R. Res. + AI      | 0.899 (0.877, 0.921) | 0.966 (0.948, 0.984) | 0.915 (0.901, 0.929) | 0.873 (0.853, 0.893) | -

C (MosMedData cohort)
ai-corona         | 0.939 (0.924, 0.954) | 0.831 (0.802, 0.860) | -                    | -                    | 0.954 (0.937, 0.971)
Jin et al. [30]   | 0.945 (0.938, 0.951) | 0.661 (0.636, 0.686) | -                    | -                    | 0.933 (0.926, 0.938)

Fig 3a through Fig 3c show that ai-corona performed better in all three classes (Normal, CP, COVID-19) than the models of Zhang et al. [29] and Jin et al. [30] on the CC-CCII test set, achieving an AUC score of 0.997, a sensitivity of 0.972, and a specificity of 0.968 on the COVID-19 class. The confusion matrix in Fig 3d showcases our framework's performance on the MDH test set for the three classes Normal, NCA, and COVID-19; on this dataset, our framework attains scores of 0.989, 0.924, and 0.983 for AUC, sensitivity, and specificity, respectively. In addition, Fig 3e and 3f show that our framework surpasses the one proposed by Jin et al. [30] on the MosMedData cohort with an AUC of 0.954. Although both have similar sensitivities in COVID-19 diagnosis, ai-corona outperforms Jin et al.'s model in non-pneumonia diagnosis with 83.07% accuracy, reporting fewer false positives. The better diagnosis performance on the CC-CCII test set indicates that diagnosing NCA from the other classes is indeed more difficult than diagnosing CP from the other classes, because the various abnormalities present in the NCA class each have their own unique imaging features.

Comparison with experts and RT-PCR

Fig 4(a) and the top diagram of Fig 5 showcase the COVID-19 diagnosis performance of ai-corona compared with that of the experts on the CC-CCII test set. As shown, our framework performs better in all cases (except for the specificity of Senior Radiologist 1). Furthermore, Fig 4(b) and the bottom diagram of Fig 5 showcase the same comparison for the MDH test set. This time, the framework performed similarly to the radiologists in specificity but outperformed them on the other metrics. In this comparison, 93.3% of the COVID-19 cases in the MDH test set (111 of 119) were diagnosed as infected by at least one expert. Of the other 8 that were not, our framework managed to report one and RT-PCR reported three as infected. If RT-PCR were the only criterion for the truth annotation, the overall sensitivity of the radiologists would improve to 97%, which would further confirm the findings in [8]. The complete reports for these two evaluations are in sections A and B of Table 2. In Fig 4(b), the sensitivity of RT-PCR based diagnosis is compared with that of CT based diagnosis. The figure shows that the RT-PCR Best sensitivity of 0.722 is lower than that of every expert diagnosing via CT. The RT-PCR Best sensitivity is an upper bound: if every admitted COVID-19 patient had been tested, instead of only patients hospitalized for more than three days, RT-PCR's sensitivity would be much lower than 0.722.

Model as expert assistant

The goal of any AI assistant model is to improve the diagnosis performance of experts. For this evaluation, the radiologists first annotated the test set. After an appropriate amount of time, they re-annotated the set a second time while having access to ai-corona's diagnosis for the entire set. The test set was also shuffled the second time to eliminate any recall of cases. The experts' diagnosis performance is depicted in Fig 5. For the CC-CCII test set, all the experts (except the Radiology Resident) improved in sensitivity, and a significant improvement in the other metrics is also seen for everyone (except Senior 1). For the MDH test set, improvement in sensitivity can be seen for Senior 1 and the Junior radiologist; specificity improved only for the Radiology Resident and remained unchanged for the others. In every other evaluation criterion, the AI model had a positive impact on the experts' performance.

Interpretation of ai-corona

To ensure that ai-corona was learning the correct imaging features, class activation maps were generated (Fig 6), following the methodology described in the Materials and methods section. In a class activation map of a slice, more predictive areas (those holding the relevant imaging features) appear brighter. Thus, the brightest areas of the class activation map correspond to the regions that most influence the model's prediction.
Fig 6

Class activation maps for ai-corona interpretation.

This highlights which pixels in the images are important for the model’s classification decision.


Additional evaluations

Additional evaluations were made as well, the results of which can be found in the Supporting Information section. First, on the MDH test set, the performance of diagnosing between the NCA and Normal classes was evaluated using the four metrics and compared to that of the experts. Furthermore, all possible comparisons between every pair of classes were made to ensure the thoroughness and completeness of our evaluation; these are showcased in S1–S6 Tables. As an example, this extra study showed that radiologists perform better than the AI model in diagnosing NCA from Normal. Lastly, it is important to note the speed at which the different methodologies perform diagnosis. RT-PCR is, by comparison, extremely slow, and our framework is faster than the fastest radiologist by a factor of 25. This is showcased in Table 3.
Table 3

Diagnosis time comparison for ai-corona and the radiologists on the 357-case test set.

Method             | Diagnosis Time
ai-corona          | 12 min
Senior 1           | 360 min
Senior 2           | 300 min
Junior             | 320 min
Radiology Resident | 400 min

Conclusion and discussion

We introduce ai-corona, a radiologist-assistant deep learning framework capable of accurate COVID-19 diagnosis in chest CT scans. Our deep learning framework was developed (training and tuning) on 5322 scans from 3985 subjects gathered from cohorts in two countries, China and Iran, and was tested on three sets: the CC-CCII test set from China (395 scans, 252 subjects), the MDH test set from Iran (357 scans, 346 subjects), and the MosMedData cohort from Russia (1110 scans, 1110 subjects). Our framework learned to diagnose patients infected with COVID-19, and to distinguish COVID-19 from other types of common pneumonia (CP), such as viral and bacterial, and from other non COVID-19 abnormalities (NCA). Moreover, a set of 2672 subjects was used to calculate the sensitivity of RT-PCR.

The use of multiple datasets, each with scans differing in the number of slices, together with the lack of slice-specific labeling, presented a challenge for this work. To address it, we dynamically select the middle 50% of slices in each scan and feed them to a single EfficientNetB3-based feature extractor, which, after an average pooling operator, yields a single feature vector to be classified. This method, alongside the use of only one 2D CNN, not only makes our framework more robust, but also makes its predictions faster and capable of running on slower hardware.

Our framework was compared with two other AI models, proposed by Zhang et al. [29] and Jin et al. [30], respectively. Its diagnosis performance was also compared with that of experts and of other means of diagnosis, in order to obtain a comprehensive and sensible picture of the framework's abilities. In the end, ai-corona managed to outperform the two other AI models in COVID-19 diagnosis, achieving high sensitivity while also maintaining high specificity. Our framework achieved an AUC score of 0.997 on the CC-CCII test set and performed better than the models proposed by Zhang et al. [29] and Jin et al. [30] on all four metrics. On the MDH test set, ai-corona gained an AUC score of 0.989 and performed mostly better than the experts across the metrics. It is worth mentioning that, for our framework, diagnosing between the COVID-19 and CP classes was easier than between COVID-19 and NCA, whereas for the experts it was the opposite. RT-PCR, as another method of diagnosis, had a sensitivity of 0.722 at best, worse than all the experts and the AI. Lastly, our framework gained a 0.954 AUC score on the MosMedData cohort, outperforming Jin et al. [30]. A complete report of these evaluations can be found in Fig 3 through Fig 5 and Table 2.

ai-corona's impact as an assistant to the experts' COVID-19 diagnosis was also evaluated; it mostly indicates a positive improvement in at least their sensitivity or specificity. This improvement is most noticeable for the Junior radiologist and the Radiology Resident. Additionally, incorporating the class activation maps into the experts' diagnosis can help them examine the involved regions better. Regarding this positive impact, two cases are discussed here to showcase how ai-corona changed the experts' minds for the better in suspicious cases. At least one expert misdiagnosed the case in Fig 7(a) as NCA at first but, upon seeing the AI's diagnosis, correctly re-diagnosed it as COVID-19.
This expert cited seeing a peribronchovascular distribution, which is not common in COVID-19 (no subpleural distribution), as the reason for their misdiagnosis. In addition, the case in Fig 7(b) was initially misdiagnosed as COVID-19 by at least one expert but was correctly changed to NCA upon seeing the AI's correct diagnosis. The expert noted that cavities, centrilobular nodules, masses, and mass-like consolidations are not commonly seen in COVID-19 pneumonia and might suggest other diagnoses. On the other hand, the existence of errors in CT-based diagnosis, both for ai-corona and for the experts, encourages us to study the causes of such errors, which might lead to better and more accurate predictions, or point out any existing fundamental flaws in CT-based diagnosis. The case in Fig 7(d) was misdiagnosed as COVID-19 by all the experts; our framework, while correctly diagnosing it as NCA, was not able to change the experts' minds. In their consensual final report, the experts cite mediastinal and bilateral hilar adenopathies, as well as an anterior mediastinal soft-tissue density. In addition, diffuse bilateral interstitial infiltrations with a crazy-paving pattern, ground glass, and traction bronchiectasis were detected, mainly in the right lung, along with partial volume loss of the right lung. The tip of a central venous catheter was also seen in the left brachiocephalic vein.
Fig 7

(a), (b), and (c) are the chest CT scans of patients who were initially misdiagnosed by at least one radiologist but were then diagnosed correctly upon incorporating ai-corona's correct prediction. (d) shows the chest CT scan of a patient that was misdiagnosed by the radiologists despite ai-corona's correct prediction.

The success of AI in medical imaging-based diagnosis has been demonstrated by this work and many others before it. ai-corona can positively influence an expert's opinion and improve the speed of the subject screening process, helping critical cases get the care they urgently need faster. But our work has its own drawbacks and shortcomings. Since gathering a dataset with better labeling (one that, alongside accurate annotations, also carries localization and slice labels) is time and resource consuming, we opted for an approach that favors robustness and is capable of learning from a simpler dataset; developing our framework on such a richer dataset would certainly improve its performance. In addition, the CP class contains all kinds of conditions and diseases that cause pneumonia. As each of these has its own distinct imaging features, having separate classes for them, especially Influenza-A, would improve the framework's performance. Lastly, our framework's learning would benefit from more cases that are positive for COVID-19 yet have a negative RT-PCR result; as these cases mostly occur in the early stages of the infection, diagnosing them is more difficult. Moreover, classifying cases with a negative RT-PCR as non COVID-19 is illogical, and a different labeling protocol should be used for them. In the future, approaches that better incorporate clinical reports with the imaging data should be explored. In conclusion, given the individual drawbacks of diagnosis based on clinical presentation, RT-PCR, and CT, a method that combines all three would yield the most accurate diagnosis of COVID-19.

The cascade structure of the MDH cohort. The number of subjects and scans in each split and set is indicated.

The preliminary dataset was cleaned by removing abdominal and high-resolution CT scans. The training and tuning sets were labeled by two expert radiologists. The NCA and Normal classes of the test set were re-annotated by three expert radiologists (one new). The COVID-19 class comprises patients who met our criteria and were hospitalized for more than three days.

The left panel shows the distribution of the number of image slices for cases in the MDH cohort, the middle panel shows the age distribution, and the right panel illustrates the sex distribution of the cases.


The ROC diagram representing the performance of various pipelines for the different combinations of comparison.

The solid black line is for ai-corona, obtained by varying the discrimination threshold used to convert the continuous probability into binary “Yes” or “No” results. The filled triangle symbols mark the (1-specificity, sensitivity) points for the individual clinical experts, while the filled circle symbols are for the model-assisted radiologists. The inset plots magnify the region of highest sensitivity and specificity.

The quantitative evaluation of ai-corona, radiologists, and AI-assisted radiologists’ performance results for differentiating between the COVID-19 class and the Normal class at a 95% confidence interval.


The quantitative evaluation of ai-corona, radiologists, and AI-assisted radiologists’ performance results for differentiating between the COVID-19 class and the NCA class at a 95% confidence interval.


The quantitative evaluation of ai-corona, radiologists, and AI-assisted radiologists’ performance results for differentiating between the NCA class and the other classes at a 95% confidence interval.


The quantitative evaluation of ai-corona, radiologists, and AI-assisted radiologists’ performance results for differentiating between the Normal class and the other classes at a 95% confidence interval.


The quantitative evaluation of ai-corona, radiologists, and AI-assisted radiologists’ performance results for differentiating between the NCA class and the Normal class at a 95% confidence interval.

(PDF) Click here for additional data file. (PDF) Click here for additional data file. 5 Mar 2021 PONE-D-21-04656 ai-corona: Deep Radiologist-Assistant for COVID-19 Diagnosis in Chest CT Scans PLOS ONE Dear Dr. Lashgari, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. The manuscript had be reviewed by 2 reviewers and both reviewers were of the view that manuscript describes a technically sound piece of scientific research and recommended minor revision. However, they had made certain observations to improve your work. After thorough consideration of comments of reviewers, my decision is "minor revision". Please incorporate comments raised by both reviewers. I noted that one of the reviewers has asked for more context in the literature review, and suggested specific papers to be cited. While you may take on-board their suggested papers if you feel that they are relevant for your manuscript, or just take on-board the general suggestion for providing some more context in the literature review, there is no requirement from the journal to cite these papers Please submit your revised manuscript by Apr 19 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript: A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols We look forward to receiving your revised manuscript. Kind regards, Gulistan Raja Academic Editor PLOS ONE Journal Requirements: Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. 
If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice. When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf 2. We suggest you thoroughly copyedit your manuscript for language usage, spelling, and grammar. If you do not know anyone who can help you do this, you may wish to consider employing a professional scientific editing service. Whilst you may use any professional scientific editing service of your choice, PLOS has partnered with both American Journal Experts (AJE) and Editage to provide discounted services to PLOS authors. Both organizations have experience helping authors meet PLOS guidelines and can provide language editing, translation, manuscript formatting, and figure formatting to ensure your manuscript meets our submission guidelines. To take advantage of our partnership with AJE, visit the AJE website (http://learn.aje.com/plos/) for a 15% discount off AJE services. To take advantage of our partnership with Editage, visit the Editage website (www.editage.com) and enter referral code PLOSEDIT for a 15% discount off Editage services.  If the PLOS editorial team finds any language issues in text that either AJE or Editage has edited, the service provider will re-edit the text for free. Upon resubmission, please provide the following: The name of the colleague or the details of the professional service that edited your manuscript A copy of your manuscript showing your changes by either highlighting them or using track changes (uploaded as a *supporting information* file) A clean copy of the edited manuscript (uploaded as the new *manuscript* file) 3. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. In your revised cover letter, please address the following prompts: a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially identifying or sensitive patient information) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent. b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. Please see http://www.bmj.com/content/340/bmj.c181.long for guidelines on how to de-identify and prepare clinical data for publication. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. 
We will update your Data Availability statement on your behalf to reflect the information you provide. 4. PLOS requires an ORCID iD for the corresponding author in Editorial Manager on papers submitted after December 6th, 2016. Please ensure that you have an ORCID iD and that it is validated in Editorial Manager. To do this, go to ‘Update my Information’ (in the upper left-hand corner of the main menu), and click on the Fetch/Validate link next to the ORCID field. This will take you to the ORCID site and allow you to create a new iD or authenticate a pre-existing iD in Editorial Manager. Please see the following video for instructions on linking an ORCID iD to your Editorial Manager account: https://www.youtube.com/watch?v=_xcclfuvtxQ 5. Please ensure that you refer to Figures 4, 8, 9 and 10 in your text as, if accepted, production will need this reference to link the reader to each figure. 6. We note you have included tables to which you do not refer in the text of your manuscript. Please ensure that you refer to Tables 3-8 in your text; if accepted, production will need this reference to link the reader to each Table. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes Reviewer #2: Yes ********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: N/A ********** 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: Yes ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: No Reviewer #2: Yes ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: The paper describes a CNN-based model for classification of 3D-CTs as either presenting COVID-19, having several other abnormalities, or being normal. An ImageNet pretrained EfficientNetB3 is used as a slice-wise feature extractor. 
Then, the features from all slices are combined by averaging and classified using a fully connected layer. The model achieves compelling results, outperforming on its own even experts helped by the model for the task of COVID-19 vs others classification. The datasets employed and the validation techniques are all sound and there is a very complete array of numerical comparisons with 4 radiologists. I recommend it for publication, although there are some comments that should be addressed: The paper requires English review. I am no native English speaker, but only in the abstract, I can find already several grammatical mistakes: “We employed three independent dataset”, “Our results show thatai-coronaoutperforms all”, “for which it gained AUC score of 0.997”, “our framework’s diagnosis capabilities was evaluated”. Line 77: The differences between NCA (non COVID-19 abnormal), non-pneumonia, and normal classes should be clarified, and also what pathologies might represent each of the first two. Line 176: Average pooling is employed to combine the final activation maps from each slice into a single activation map for the whole CT. However, this ignores any kind of spatial information contained along the z-axis, as it is all averaged. Furthermore, using average pooling might bury any positive detection among many negative ones. Have you tried using another combination method that is aware of the z-axis information (such as 3D pooling, instead of 2D, or using 3D convolutions)? Have you tried using max-pooling instead of average pooling? Also, I would have suggested to add the previous and next slice to the input of each EfficientNet feature extractor, instead of repeating the same slice three times, as way to add some z-axis information to each prediction. Line 189: Do you use EfficientNetB5 or B3? B5 is mentioned here, but B3 is mentioned in the rest of the text. Also, Figure 1 shows that the input images are of resolution 512x512, but that is not the input resolution of the EfficientNetB3, nor the B5. Reviewer #2: Minor Revision # In the introductory section, the first paragraph should discuss about COVID-19 pandemic, number cases, deaths, recovered and the taxonomy of the virus etc. # Author should stress on why their contribution stand out or achieved better result than others. # There should be bullet points on authors contribution at the end of introduction. # Author should add a paragraph on COVID-19 vaccines. # Author need to add a section on “Related work” and a table to summarized contributions in terms of dataset and performance evaluation. # Author need to add a limitations and future work. # Authors can also add these references 1. Chowdhury, M. E., Rahman, T., Khandakar, A., Mazhar, R., Kadir, M. A., Mahbub, Z. B., ... & Islam, M. T. (2020). Can AI help in screening viral and COVID-19 pneumonia?. IEEE Access, 8, 132665-132676. 2. Ibrahim, A. U., Ozsoz, M., Serte, S., Al-Turjman, F., & Yakoi, P. S. (2021). Pneumonia classification using deep learning from chest X-ray images during COVID-19. Cognitive Computation, 1-13. 3. Burki, T. K. (2020). The Russian vaccine for COVID-19. The Lancet Respiratory Medicine, 8(11), e85-e86. ********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? 
For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #1: Yes: Oscar José Pellicer Valero
Reviewer #2: Yes: Abdullahi Umar Ibrahim PhD

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Submitted filename: PONE-D-21-04656.docx

15 Apr 2021

Dear Dr. Gulistan Raja,
Academic Editor, PLOS ONE

Thank you for giving us the opportunity to resubmit our revised manuscript "ai-corona: Deep Radiologist-Assistant for COVID-19 Diagnosis in Chest CT Scans" (Ms. No: PONE-D-21-04656) to PLOS ONE. In this version of the manuscript, we have addressed all the criticisms of the reviewers and made changes throughout the main manuscript text. We appreciate the time and effort that you and the reviewers have dedicated to providing valuable feedback on our manuscript. We are grateful for the reviewers' careful reading of the paper and their excellent criticisms and recommendations to improve it; these criticisms led to significant improvements in the manuscript. We also revised the title of the manuscript as follows: "ai-corona: Radiologist-Assistant Deep Learning Framework for COVID-19 Diagnosis in Chest CT Scans". Below, we detail how we have responded to the two reviewers' criticisms.

Sincerely,
Reza Lashgari, Ph.D

-----------------------------------------------------------------------------------

Summary of the changes: All changes in the manuscript were highlighted in blue.
• The manuscript text was justified.
• The entire manuscript has been carefully edited by a native English speaker to improve grammar and readability.
• References, numbering, and the references section were reviewed and double-checked.
• We incorporated the data availability section into the final manuscript.
• Zero padding to 3 decimal points was added in Table 3 for coherence. To prevent visual congestion in the "supporting information" version, blue highlighting was left out.
• The positioning of some of the tables and figures was changed for better visual presentation.
• We added the author contributions at the end of the manuscript.

Manuscript Title: We revised the manuscript title to "ai-corona: Radiologist-Assistant Deep Learning Framework for COVID-19 Diagnosis in Chest CT Scans". We would appreciate it if the editor and reviewers accept the title as revised.

Changes to the Affiliations: The affiliations were revised and the symbol was removed from the authors D. Rahmati and A. Kiani.

Changes to the Abstract: We significantly revised the abstract according to the reviewer's comment.
The new abstract follows the same semantic and sentence structure as the previous one, but is written in clearer English so that a broader audience can understand the science behind our article.

Changes to the Introduction Section:
• A sentence discussing the COVID-19 pandemic, the number of cases, deaths, etc. was added to the first paragraph with appropriate citations.
• A sentence discussing COVID-19 vaccines and their challenges was added with an appropriate citation.
• The paper's contributions and achievements were summarized at the end of the introduction section.
• The manuscript has been carefully edited by a native English speaker to improve grammar and eloquence.

Changes to the Materials and Methods Section:
• Added references to the figures.
• Addressed the individual reviewer points.
• The entire manuscript has been carefully edited by a native English speaker to improve grammar and readability.

Changes to the Results Section:
• We included references to the figures and tables.
• All of the figures and tables in the Supporting Information section are now referenced.
• Added a diagnosis time comparison table and its appropriate referencing.
• The paper has been carefully edited by a native English speaker to improve grammar and readability.

Changes to the Conclusions Section: The conclusion section has been carefully edited by a native English speaker to improve grammar and readability.

Changes to the Supporting Information Section:
• Slight changes were made to Supporting Fig 1.
• Captions for figures and tables were modified.
• Multiple grammatical corrections were made.

Response to reviewers' comments: We sincerely appreciate all your valuable comments and suggestions, which helped us improve the quality of the article. Our responses to the reviewers' comments are described below in a point-by-point manner. The changes suggested by the reviewers have been incorporated into the new version of the manuscript (highlighted in blue within the document).

Response to Reviewer #1:

The paper describes a CNN-based model for classification of 3D CTs as either presenting COVID-19, having several other abnormalities, or being normal. An ImageNet-pretrained EfficientNetB3 is used as a slice-wise feature extractor. Then, the features from all slices are combined by averaging and classified using a fully connected layer. The model achieves compelling results, on its own outperforming even experts helped by the model for the task of COVID-19 vs. others classification. The datasets employed and the validation techniques are all sound, and there is a very complete array of numerical comparisons with 4 radiologists. I recommend it for publication, although there are some comments that should be addressed:

Thank you for the excellent comments and constructive review.

The paper requires English review. I am not a native English speaker, but in the abstract alone I can already find several grammatical mistakes: "We employed three independent dataset", "Our results show that ai-corona outperforms all", "for which it gained AUC score of 0.997", "our framework's diagnosis capabilities was evaluated".

Thank you. The manuscript has been carefully edited by a native speaker.

Line 77: The differences between the NCA (non-COVID-19 abnormal), non-pneumonia, and normal classes should be clarified, as well as which pathologies each of the first two might represent.

We thank the reviewer for their feedback.
We agree with the reviewer's comment regarding the lack of clarity in the class definitions and, as a result, we have elaborated upon these differences more carefully. For the difference between the NCA and the non-pneumonia classes, we added the following explanation to the Data subsection: "having the NCA class is crucial, as it includes abnormalities such as atelectasis, cardiomegaly, lung emphysematous, hydropneumothorax, pneumothorax, cardiopulmonary edema, cavity, fibrocavitary changes, fibrobronchiectatic, mass, and nodule."

Line 176: Average pooling is employed to combine the final activation maps from each slice into a single activation map for the whole CT. However, this ignores any kind of spatial information contained along the z-axis, as it is all averaged. Furthermore, using average pooling might bury any positive detection among many negative ones. Have you tried using another combination method that is aware of the z-axis information (such as 3D pooling instead of 2D, or 3D convolutions)?

Thank you. The reviewer's concern about the neglect of z-axis information is valid. We were aware of this reduction, and accordingly we compared our algorithm with other combination methods. In the initial prototyping phase of our model development, we experimented with 3D-CNN models that extract z-axis information, such as 3D-ResNet. The results were poor, and this method was discarded.

Have you tried using max pooling instead of average pooling?

We initially used max pooling in our model's last layer (following the MRNet paper). However, our experiments indicated that average pooling gives better results than max pooling in our algorithm, so we only reported the algorithm with average pooling. Average pooling encourages the network to identify the complete extent of the object, whereas max pooling restricts it to only the most salient features and might miss some details. Based on this intuition, we speculate that the class information is encoded at the pixel-population level, so average pooling performs better in our experiments.

Also, I would suggest adding the previous and next slice to the input of each EfficientNet feature extractor, instead of repeating the same slice three times, as a way to add some z-axis information to each prediction.

Thank you for the suggestion. For the model input size, we experimented with two settings: EfficientNet's standard input size and the scan's default size of 512x512. The second setting showed significantly better results. It is worth mentioning that our model's input is a 3-channel grayscale image, which is standard practice in the literature. As we were satisfied with our results and did not have time to experiment with other methods, we decided to commit to this practice.

Line 189: Do you use EfficientNetB5 or B3? B5 is mentioned here, but B3 is mentioned in the rest of the text. Also, Figure 1 shows that the input images are of resolution 512x512, but that is not the input resolution of the EfficientNetB3, nor the B5.

Thank you for the important point, and sorry for the typo. The employed feature extractor is based on EfficientNetB3, but was mistakenly typed as EfficientNetB5. This is now fixed.
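The exchange above describes the architecture precisely enough to sketch it. The following is a minimal, illustrative PyTorch sketch, not the authors' released code: an ImageNet-pretrained EfficientNetB3 backbone turns each grayscale slice (repeated to 3 channels and kept at the native 512x512 resolution) into a 1536-dimensional feature vector, the vectors are pooled across the z-axis (average by default, with max pooling as the alternative the reviewer asks about), and a fully connected layer produces the class scores. The class count, class names, and the torchvision backbone are assumptions made for illustration.

import torch
import torch.nn as nn
from torchvision.models import efficientnet_b3, EfficientNet_B3_Weights

class SliceAveragedCTClassifier(nn.Module):
    """Illustrative sketch: slice-wise 2D backbone + cross-slice pooling + linear head."""

    def __init__(self, num_classes: int = 3, pool: str = "avg"):
        super().__init__()
        # ImageNet-pretrained EfficientNetB3 (downloads weights on first use).
        backbone = efficientnet_b3(weights=EfficientNet_B3_Weights.IMAGENET1K_V1)
        self.features = backbone.features        # convolutional feature extractor
        self.squeeze = nn.AdaptiveAvgPool2d(1)   # per-slice spatial pooling
        self.pool = pool                         # "avg" or "max" across slices
        self.classifier = nn.Linear(1536, num_classes)  # 1536 = B3 feature width

    def forward(self, scan: torch.Tensor) -> torch.Tensor:
        # scan: (num_slices, 1, H, W), the grayscale slices of one CT volume.
        x = scan.repeat(1, 3, 1, 1)              # repeat each slice to 3 channels
        feats = self.squeeze(self.features(x)).flatten(1)  # (num_slices, 1536)
        if self.pool == "avg":
            pooled = feats.mean(dim=0)           # average across the z-axis
        else:
            pooled = feats.max(dim=0).values     # the max-pooling alternative
        return self.classifier(pooled)           # (num_classes,) logits

# Example: one hypothetical 30-slice scan at the native 512x512 resolution.
model = SliceAveragedCTClassifier(num_classes=3).eval()
with torch.no_grad():
    logits = model(torch.randn(30, 1, 512, 512))
print(logits.shape)  # torch.Size([3])

Swapping pool to "max" reproduces the comparison described in the response: averaging integrates evidence over all slices, while max keys on the single most activated slice, which matches the authors' intuition for why averaging worked better here.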
Response to Reviewer #2:

Thank you for raising the following points and for the constructive review and comments.

# In the introductory section, the first paragraph should discuss the COVID-19 pandemic, the number of cases, deaths, and recoveries, the taxonomy of the virus, etc.

You have raised an important point here, and we have incorporated your suggestion throughout the manuscript. Accordingly, we briefly discussed the COVID-19 pandemic, the number of cases, deaths, and vaccination in the first paragraph of the introduction. Please see the blue-highlighted text in the main body of the revised manuscript.

# The authors should stress why their contribution stands out or achieves better results than others.

In all the tables, the best results were made bold, and we also added a paragraph at the end of the introduction.

# There should be bullet points on the authors' contributions at the end of the introduction.

Thank you. We restructured the authors' contribution section as suggested.

# The authors should add a paragraph on COVID-19 vaccines.

Thank you for raising this point. Following the reviewer's suggestion, and given the global effort to develop COVID-19 vaccines and their effect on the treatment of newly infected subjects, we briefly added a sentence in the first paragraph of the introduction section elaborating upon this issue. Particularly given the limited protection offered by existing vaccines, we argue that the development of radiologist-assistance technologies for the detection and treatment of COVID-19 is still essential.

# The authors need to add a "Related work" section and a table summarizing contributions in terms of dataset and performance evaluation.

Thank you. We believe the contribution of each dataset should be reported individually; accordingly, the related works are sufficiently discussed in the introduction section. Also, the results for the two similar works are presented in Table 2.

# The authors need to add limitations and future work.

Our work's limitations and future directions are described in the conclusion and discussion.

# The authors can also add these references:
1. Chowdhury, M. E., Rahman, T., Khandakar, A., Mazhar, R., Kadir, M. A., Mahbub, Z. B., ... & Islam, M. T. (2020). Can AI help in screening viral and COVID-19 pneumonia? IEEE Access, 8, 132665-132676.
2. Ibrahim, A. U., Ozsoz, M., Serte, S., Al-Turjman, F., & Yakoi, P. S. (2021). Pneumonia classification using deep learning from chest X-ray images during COVID-19. Cognitive Computation, 1-13.
3. Burki, T. K. (2020). The Russian vaccine for COVID-19. The Lancet Respiratory Medicine, 8(11), e85-e86.

Thank you. We included references 1 and 2 in the introduction and reference sections.

Submitted filename: final_Response to Reviewers_PLOS One.pdf

19 Apr 2021

ai-corona: Radiologist-Assistant Deep Learning Framework for COVID-19 Diagnosis in Chest CT Scans
PONE-D-21-04656R1

Dear Dr. Lashgari,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.
To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Gulistan Raja
Academic Editor
PLOS ONE

27 Apr 2021

PONE-D-21-04656R1
ai-corona: Radiologist-Assistant Deep Learning Framework for COVID-19 Diagnosis in Chest CT Scans

Dear Dr. Lashgari:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Gulistan Raja
Academic Editor
PLOS ONE
References: 22 in total

1.  The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): a completed reference database of lung nodules on CT scans.

Authors:  Samuel G Armato; Geoffrey McLennan; Luc Bidaut; Michael F McNitt-Gray; Charles R Meyer; Anthony P Reeves; Binsheng Zhao; Denise R Aberle; Claudia I Henschke; Eric A Hoffman; Ella A Kazerooni; Heber MacMahon; Edwin J R Van Beeke; David Yankelevitz; Alberto M Biancardi; Peyton H Bland; Matthew S Brown; Roger M Engelmann; Gary E Laderach; Daniel Max; Richard C Pais; David P Y Qing; Rachael Y Roberts; Amanda R Smith; Adam Starkey; Poonam Batrah; Philip Caligiuri; Ali Farooqi; Gregory W Gladish; C Matilda Jude; Reginald F Munden; Iva Petkovska; Leslie E Quint; Lawrence H Schwartz; Baskaran Sundaram; Lori E Dodd; Charles Fenimore; David Gur; Nicholas Petrick; John Freymann; Justin Kirby; Brian Hughes; Alessi Vande Casteele; Sangeeta Gupte; Maha Sallamm; Michael D Heath; Michael H Kuhn; Ekta Dharaiya; Richard Burns; David S Fryd; Marcos Salganicoff; Vikram Anand; Uri Shreter; Stephen Vastagh; Barbara Y Croft
Journal:  Med Phys       Date:  2011-02       Impact factor: 4.071

2.  SARS-CoV-2 detection in different respiratory sites: A systematic review and meta-analysis.

Authors:  Abbas Mohammadi; Elmira Esmaeilzadeh; Yijia Li; Ronald J Bosch; Jonathan Z Li
Journal:  EBioMedicine       Date:  2020-07-24       Impact factor: 8.143

3.  Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China.

Authors:  Chaolin Huang; Yeming Wang; Xingwang Li; Lili Ren; Jianping Zhao; Yi Hu; Li Zhang; Guohui Fan; Jiuyang Xu; Xiaoying Gu; Zhenshun Cheng; Ting Yu; Jiaan Xia; Yuan Wei; Wenjuan Wu; Xuelei Xie; Wen Yin; Hui Li; Min Liu; Yan Xiao; Hong Gao; Li Guo; Jungang Xie; Guangfa Wang; Rongmeng Jiang; Zhancheng Gao; Qi Jin; Jianwei Wang; Bin Cao
Journal:  Lancet       Date:  2020-01-24       Impact factor: 79.321

4.  COVID-19 patients and the radiology department - advice from the European Society of Radiology (ESR) and the European Society of Thoracic Imaging (ESTI).

Authors:  Marie-Pierre Revel; Anagha P Parkar; Helmut Prosch; Mario Silva; Nicola Sverzellati; Fergus Gleeson; Adrian Brady
Journal:  Eur Radiol       Date:  2020-04-20       Impact factor: 5.315

5.  Detection of 2019 novel coronavirus (2019-nCoV) by real-time RT-PCR.

Authors:  Victor M Corman; Olfert Landt; Marco Kaiser; Richard Molenkamp; Adam Meijer; Daniel Kw Chu; Tobias Bleicker; Sebastian Brünink; Julia Schneider; Marie Luisa Schmidt; Daphne Gjc Mulders; Bart L Haagmans; Bas van der Veer; Sharon van den Brink; Lisa Wijsman; Gabriel Goderski; Jean-Louis Romette; Joanna Ellis; Maria Zambon; Malik Peiris; Herman Goossens; Chantal Reusken; Marion Pg Koopmans; Christian Drosten
Journal:  Euro Surveill       Date:  2020-01

6.  Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning.

Authors:  Nicolas Coudray; Paolo Santiago Ocampo; Theodore Sakellaropoulos; Navneet Narula; Matija Snuderl; David Fenyö; Andre L Moreira; Narges Razavian; Aristotelis Tsirigos
Journal:  Nat Med       Date:  2018-09-17       Impact factor: 53.440

7.  Deep-learning-assisted diagnosis for knee magnetic resonance imaging: Development and retrospective validation of MRNet.

Authors:  Nicholas Bien; Pranav Rajpurkar; Robyn L Ball; Jeremy Irvin; Allison Park; Erik Jones; Michael Bereket; Bhavik N Patel; Kristen W Yeom; Katie Shpanskaya; Safwan Halabi; Evan Zucker; Gary Fanton; Derek F Amanatullah; Christopher F Beaulieu; Geoffrey M Riley; Russell J Stewart; Francis G Blankenberg; David B Larson; Ricky H Jones; Curtis P Langlotz; Andrew Y Ng; Matthew P Lungren
Journal:  PLoS Med       Date:  2018-11-27       Impact factor: 11.069

8.  Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists.

Authors:  Pranav Rajpurkar; Jeremy Irvin; Robyn L Ball; Kaylie Zhu; Brandon Yang; Hershel Mehta; Tony Duan; Daisy Ding; Aarti Bagul; Curtis P Langlotz; Bhavik N Patel; Kristen W Yeom; Katie Shpanskaya; Francis G Blankenberg; Jayne Seekins; Timothy J Amrhein; David A Mong; Safwan S Halabi; Evan J Zucker; Andrew Y Ng; Matthew P Lungren
Journal:  PLoS Med       Date:  2018-11-20       Impact factor: 11.069

9.  Clinically Applicable AI System for Accurate Diagnosis, Quantitative Measurements, and Prognosis of COVID-19 Pneumonia Using Computed Tomography.

Authors:  Kang Zhang; Xiaohong Liu; Jun Shen; Zhihuan Li; Ye Sang; Xingwang Wu; Yunfei Zha; Wenhua Liang; Chengdi Wang; Ke Wang; Linsen Ye; Ming Gao; Zhongguo Zhou; Liang Li; Jin Wang; Zehong Yang; Huimin Cai; Jie Xu; Lei Yang; Wenjia Cai; Wenqin Xu; Shaoxu Wu; Wei Zhang; Shanping Jiang; Lianghong Zheng; Xuan Zhang; Li Wang; Liu Lu; Jiaming Li; Haiping Yin; Winston Wang; Oulan Li; Charlotte Zhang; Liang Liang; Tao Wu; Ruiyun Deng; Kang Wei; Yong Zhou; Ting Chen; Johnson Yiu-Nam Lau; Manson Fok; Jianxing He; Tianxin Lin; Weimin Li; Guangyu Wang
Journal:  Cell       Date:  2020-05-04       Impact factor: 41.582

Review 10.  COVID-19 vaccines: where we stand and challenges ahead.

Authors:  Guido Forni; Alberto Mantovani
Journal:  Cell Death Differ       Date:  2021-01-21       Impact factor: 15.828

Cited by: 14 in total

1.  Challenges of Multiplex Assays for COVID-19 Research: A Machine Learning Perspective.

Authors:  Paul C Guest; David Popovic; Johann Steiner
Journal:  Methods Mol Biol       Date:  2022

2.  An Analysis of New Feature Extraction Methods Based on Machine Learning Methods for Classification Radiological Images.

Authors:  Firoozeh Abolhasani Zadeh; Mohammadreza Vazifeh Ardalani; Ali Rezaei Salehi; Roza Jalali Farahani; Mandana Hashemi; Adil Hussein Mohammed
Journal:  Comput Intell Neurosci       Date:  2022-05-25

Review 3.  Application of machine learning in CT images and X-rays of COVID-19 pneumonia.

Authors:  Fengjun Zhang
Journal:  Medicine (Baltimore)       Date:  2021-09-10       Impact factor: 1.817

4.  The application of a deep learning system developed to reduce the time for RT-PCR in COVID-19 detection.

Authors:  Yoonje Lee; Yu-Seop Kim; Da-In Lee; Seri Jeong; Gu-Hyun Kang; Yong Soo Jang; Wonhee Kim; Hyun Young Choi; Jae Guk Kim; Sang-Hoon Choi
Journal:  Sci Rep       Date:  2022-01-24       Impact factor: 4.379

5.  Statistical analysis of COVID-19 infection severity in lung lobes from chest CT.

Authors:  Mehdi Yousefzadeh; Mozhdeh Zolghadri; Masoud Hasanpour; Fatemeh Salimi; Ramezan Jafari; Mehran Vaziri Bozorg; Sara Haseli; Abolfazl Mahmoudi Aqeel Abadi; Shahrokh Naseri; Mohammadreza Ay; Mohammad-Reza Nazem-Zadeh
Journal:  Inform Med Unlocked       Date:  2022-04-01

6.  ADA-COVID: Adversarial Deep Domain Adaptation-Based Diagnosis of COVID-19 from Lung CT Scans Using Triplet Embeddings.

Authors:  Mehrad Aria; Esmaeil Nourani; Amin Golzari Oskouei
Journal:  Comput Intell Neurosci       Date:  2022-02-08

Review 7.  Role of Artificial Intelligence in COVID-19 Detection.

Authors:  Anjan Gudigar; U Raghavendra; Sneha Nayak; Chui Ping Ooi; Wai Yee Chan; Mokshagna Rohit Gangavarapu; Chinmay Dharmik; Jyothi Samanth; Nahrizul Adib Kadri; Khairunnisa Hasikin; Prabal Datta Barua; Subrata Chakraborty; Edward J Ciaccio; U Rajendra Acharya
Journal:  Sensors (Basel)       Date:  2021-12-01       Impact factor: 3.576

8.  A Meta-Analysis of Computerized Tomography-Based Radiomics for the Diagnosis of COVID-19 and Viral Pneumonia.

Authors:  Yung-Shuo Kao; Kun-Te Lin
Journal:  Diagnostics (Basel)       Date:  2021-05-29

9.  Improving effectiveness of different deep learning-based models for detecting COVID-19 from computed tomography (CT) images.

Authors:  Erdi Acar; Engin Şahin; İhsan Yılmaz
Journal:  Neural Comput Appl       Date:  2021-07-29       Impact factor: 5.102

10.  Deep Ensemble Model for COVID-19 Diagnosis and Classification Using Chest CT Images.

Authors:  Mahmoud Ragab; Khalid Eljaaly; Nabil A Alhakamy; Hani A Alhadrami; Adel A Bahaddad; Sayed M Abo-Dahab; Eied M Khalil
Journal:  Biology (Basel)       Date:  2021-12-29
