Literature DB >> 33372243

The usage of deep neural network improves distinguishing COVID-19 from other suspected viral pneumonia by clinicians on chest CT: a real-world study.

Qiuchen Xie1, Yiping Lu1, Xiancheng Xie2, Nan Mei1, Yun Xiong3, Xuanxuan Li1, Yangyong Zhu3, Anling Xiao4, Bo Yin5.   

Abstract

OBJECTIVES: Based on the current clinical routine, we aimed to develop a novel deep learning model to distinguish coronavirus disease 2019 (COVID-19) pneumonia from other types of pneumonia and validate it with a real-world dataset (RWD).
METHODS: A total of 563 chest CT scans of 380 patients (227/380 were diagnosed with COVID-19 pneumonia) from 5 hospitals were collected to train our deep learning (DL) model. Lung regions were extracted by U-net, then transformed and fed to a pre-trained ResNet-50-based IDANNet (Identification and Analysis of New covid-19 Net) to produce a diagnostic probability. Fivefold cross-validation was employed to validate the model. Another 318 scans of 316 patients (243/316 were diagnosed with COVID-19 pneumonia) from 2 other hospitals were enrolled prospectively as the RWD to test our DL model's performance and compare it with that of 3 experienced radiologists.
RESULTS: A three-dimensional DL model was successfully established. The diagnostic threshold to differentiate COVID-19 from non-COVID-19 pneumonia was 0.685, with an AUC of 0.906 (95% CI: 0.886-0.913) in the internal validation group. In the RWD cohort, our model achieved an AUC of 0.868 (95% CI: 0.851-0.876) with a sensitivity of 0.811 and a specificity of 0.822, non-inferior to the performance of 3 experienced radiologists, suggesting promise for use in clinical practice.
CONCLUSIONS: The established DL model achieved accurate identification of COVID-19 pneumonia from other suspected viral pneumonias in a real-world setting and could become a reliable tool in clinical routine. KEY POINTS: • In an internal validation set, our DL model achieved its best performance in differentiating COVID-19 from non-COVID-19 pneumonia, with a sensitivity of 0.836, a specificity of 0.800, and an AUC of 0.906 (95% CI: 0.886-0.913) when the threshold was set at 0.685. • In the prospective RWD cohort, our DL diagnostic model achieved a sensitivity of 0.811, a specificity of 0.822, and an AUC of 0.868 (95% CI: 0.851-0.876), non-inferior to the performance of 3 experienced radiologists. • The attention heatmaps were generated entirely by the model without additional manual annotation, and the attention regions aligned closely with the ROIs acquired by human radiologists for diagnosis.

Keywords:  COVID-19; Deep learning; Differential diagnosis

Year:  2020        PMID: 33372243      PMCID: PMC7769567          DOI: 10.1007/s00330-020-07553-7

Source DB:  PubMed          Journal:  Eur Radiol        ISSN: 0938-7994            Impact factor:   5.315


Introduction

The newly emerging coronavirus disease (COVID-19, named by the WHO) has spread globally, causing about 165,000 deaths and enormous economic losses [1, 2]. This is the third zoonotic coronavirus outbreak of the twenty-first century and has become a daunting challenge to humanity [3]. With the virus's rapid spread across many countries, new requirements for epidemic prevention and control have emerged [4-6]. Currently, the diagnosis of COVID-19 depends entirely on the SARS-CoV-2-specific reverse transcriptase-polymerase chain reaction (RT-PCR) test, although new methods have been developed or are under development [7-9]. Chest computed tomography (CT) is important in the diagnosis and treatment of lung diseases, including viral pneumonia. Compared with molecular diagnostic testing, CT scanning offers a faster turnaround time, more detailed pathology-related information, and quantitative measurement of lesion size and lung involvement, which may have important implications for prognosis [10]. Subpleural ground-glass opacities (GGOs) and the “crazy paving” sign have been reported by several papers as typical findings in COVID-19 pneumonia patients [10, 11]. However, no CT manifestation is unique to COVID-19 pneumonia. Although the Fleischner Society has published a guideline to help radiologists identify typical features of COVID-19 pneumonia, there is so far no high-level evidence-based diagnostic test clarifying the diagnostic efficiency of such features as assessed by radiologists [12]. These non-quantifiable radiological findings are too subjective to support a diagnostic criterion for COVID-19 pneumonia based on human-perceived CT findings [13, 14]. In recent years, deep learning (DL) has shown promising potential in the automatic diagnosis and differential diagnosis of various diseases [15-17]. 
Many studies have applied convolutional neural networks (CNNs) to medical problems such as pneumonia detection and classification, outperforming not only traditional machine-learning approaches but also the human benchmarks used in previous studies [15-19]. Several new DL models have been developed to diagnose COVID-19 pneumonia from chest CT images [20-22]. However, few prospective deep learning studies or randomized trials exist in this field, and most of the independent datasets used to test DL models are likely to carry a high risk of bias [23]. Validating the generalization ability of DL models on a real-world dataset (RWD) is therefore important for translating them from academia to clinical practice [24, 25]. In this study, we constructed a novel deep learning model to distinguish COVID-19 pneumonia from all suspected COVID-19 pneumonia and validated it with an RWD to test its value in clinical routine.

Materials and methods

Our institutional review board approved this multi-center retrospective study and waived the requirement for written informed consent. De-identified data were used to protect patient privacy. The workflow is depicted in Fig. 1.
Fig. 1

The workflow of the whole study

Patient characteristics for model-training group

To establish our artificial intelligence COVID-19 classification model, 563 chest CT exams from 380 patients, acquired from Jan. 1 to March 18, 2020, were enrolled in the model-training group. CT scans were selected from 5 institutions in Anhui Province, Zhejiang Province, and Shanghai and met the following criteria: (1) suspected viral pneumonia manifestations on chest CT, including single or scattered GGO or GGO-predominant density; (2) laboratory tests and RT-PCR tests performed to identify the pathogen of the pneumonia; (3) no significant artifacts. Fivefold cross-validation was used for hyperparameter fine-tuning and model evaluation.
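Because the 563 training scans come from only 380 patients, a sound fivefold split should keep every scan of a given patient in a single fold so that no patient leaks across folds. A minimal patient-grouped split might look like the following sketch (plain Python; the function and variable names are hypothetical, not from the authors' code):

```python
import random
from collections import defaultdict

def grouped_kfold(scan_ids, patient_of, k=5, seed=0):
    """Split scans into k folds so that no patient spans two folds.

    scan_ids:   list of scan identifiers
    patient_of: dict mapping scan id -> patient id
    Returns a list of k lists of scan ids (the validation folds).
    """
    by_patient = defaultdict(list)
    for s in scan_ids:
        by_patient[patient_of[s]].append(s)
    patients = sorted(by_patient)
    random.Random(seed).shuffle(patients)
    folds = [[] for _ in range(k)]
    # Greedily assign each patient's scans to the currently smallest fold,
    # keeping the folds roughly balanced in scan count.
    for p in patients:
        smallest = min(range(k), key=lambda i: len(folds[i]))
        folds[smallest].extend(by_patient[p])
    return folds
```

Hyperparameters are then tuned by training on four folds and validating on the fifth, rotating through all five assignments.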

Patient characteristics for real-world data

To address regional variation and assess the general applicability of our DL diagnostic model, its performance was tested prospectively in a real-world cohort from two institutions: one in the epicenter, Hubei, China (the city of Wuhan), and the other in a non-epidemic area of China (the city of Shanghai). The inclusion criteria for the RWD cohort were: (1) suspected COVID-19 manifestations on chest CT, including GGO or GGO-predominant density; (2) no significant artifacts. After a case was reported as suspected COVID-19 by radiologists, hospital epidemiologists assessed the patient on the basis of clinical information and laboratory and radiological results, and RT-PCR tests were then performed for the final diagnosis. We consecutively enrolled patients (n = 3416) who underwent CT scans at INSTITUTION ONE (Huashan Hospital, representing the non-epidemic area) from Jan 11 to April 11, 2020, and all patients (n = 328) who underwent CT scans at INSTITUTION TWO (Wuhan Fangcang Hospital, representing the epidemic area) from Feb. 21 to March 8, 2020. Among them, a total of 316 patients met our criteria and were consecutively enrolled in our RWD.

CT scanning protocol

A total of 54 CT scans of 52 patients from Institution train-A (Huashan North Hospital) were acquired with a 16-section CT scanner (uCT 510, UIH). Six CT scans of 6 patients from Institution train-B (Taizhou People’s Hospital) were acquired with a 16-section CT scanner (LightSpeed CT, GE Medical System). A total of 58 CT scans of 58 patients from Institution train-C (Huashan East Hospital) were acquired with a 64-section CT scanner (Aquilion Prime, Toshiba Medical Systems). In total, 375 CT scans of 197 patients from Institution train-D (Fuyang No. 2 People’s Hospital) were acquired with a 64-section CT scanner (Aquilion 64, Toshiba Medical Systems). Seventy CT scans of 70 patients from Institution train-E (Ma’Anshan No. 4 People’s Hospital) were acquired with a 64-section CT scanner (Siemens Somatom Sensation). A total of 85 scans of 83 patients from Institution test-A were acquired with a 64-section CT scanner (Discovery CT, GE Medical System). A total of 233 scans of 233 cases from Institution test-B were acquired with a 16-section CT scanner (uCT 550, UIH, China). Images were reviewed at lung (window width, 1500 HU; window level, − 500 HU) and mediastinal (window width, 320 HU; window level, 40 HU) window settings with 5-mm slice thickness.

Deep learning model

We utilized a 3D DL framework, referred to as IDANNet, to distinguish COVID-19 from other clinically suspected viral pneumonias. It effectively extracts both 2D local features and 3D global features. IDANNet uses ResNet-50 as the backbone, taking CT slices as input and extracting features for each slice. The extracted slice features are then fed into a feature fusion layer, which captures sequence dependency, followed by a max-pooling layer. The feature fusion layer consists of a two-layer CNN. The final extracted features are passed through a dense layer with SoftMax activation to produce the probability of COVID-19 pneumonia (Fig. 2).
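As a shape-level illustration (not the authors' implementation), the data flow described above — per-slice backbone features, a two-layer convolutional fusion over the slice axis, max pooling, and a softmax head — can be mocked in NumPy with random weights. All dimensions and names here are hypothetical stand-ins for the real pre-trained ResNet-50 pipeline:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def conv1d(x, w):
    """'Same'-padded 1-D convolution along the slice axis.
    x: (n_slices, c_in), w: (kernel, c_in, c_out)."""
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    return np.stack([(xp[i:i + k, :, None] * w).sum(axis=(0, 1))
                     for i in range(x.shape[0])])

def idannet_forward(slices, feat_dim=64, hidden=32, n_classes=2, seed=0):
    """Mock of the described pipeline for one CT study of shape (n_slices, H, W).
    A random projection stands in for the pre-trained ResNet-50 backbone."""
    rng = np.random.default_rng(seed)
    n, h, w = slices.shape
    proj = rng.standard_normal((h * w, feat_dim)) * 0.01
    feats = np.maximum(slices.reshape(n, -1) @ proj, 0)     # per-slice features
    w1 = rng.standard_normal((3, feat_dim, hidden)) * 0.05  # fusion conv layer 1
    w2 = rng.standard_normal((3, hidden, hidden)) * 0.05    # fusion conv layer 2
    fused = np.maximum(conv1d(np.maximum(conv1d(feats, w1), 0), w2), 0)
    pooled = fused.max(axis=0)                              # max-pool over slices
    dense = rng.standard_normal((hidden, n_classes)) * 0.1
    return softmax(pooled @ dense)                          # class probabilities
```

The max-pool over the slice axis is what makes the study-level prediction independent of the number of slices in a given CT series.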
Fig. 2

The illustration of the network architectures of our proposed deep learning (DL) model, including U-net and COVIDNet. a U-net is composed of a two-stage segmentation module for acceleration. In the first stage, we down-sampled the input image to a 128 × 128 level and segmented the lung field from the image, as the patterns of lung fields are easily learned at a relatively low resolution. In the second stage, we first calculated the bounding box from the lung field segmentation results. The key region was cropped from the original input image and resized to a 256 × 256 level as the input for the second-stage segmentation model. b The 3D classification network (COVIDNet) used in our COVID-19 diagnosis system is a convolutional neural network with ResNet-50 as the backbone. A series of CT images is fed into COVIDNet to generate feature maps, followed by the feature fusion layer, which consists of 2 convolution layers. The final extracted features are fed into a dense layer with SoftMax activation to generate the prediction for COVID-19 pneumonia

More specifically, given that a CT study consists of a series of CT slices, we first preprocessed them and extracted the lung regions using a U-net segmentation model trained on a Kaggle dataset (https://www.kaggle.com/kmader/finding-lungs-in-ct-data). We augmented the training set with random horizontal flips, random rotation, random scaling, random translation, and random elastic transformation. The main code is available at https://github.com/LittleRedHat/COVID-19. Performance in the training group was calculated as the mean over five random groupings. Patients in the RWD group were used to test the performance of our DL model.
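The cropping step described for the second segmentation stage — bounding box from the lung mask, crop, resize — can be sketched as follows. This is a simplified NumPy illustration with nearest-neighbour resizing; the actual pipeline presumably uses a learned second-stage model and proper interpolation:

```python
import numpy as np

def crop_to_lungs(image, lung_mask, out_size=256):
    """Crop a CT slice to the lung bounding box and resize (nearest neighbour).

    image:     2-D array (the original slice)
    lung_mask: 2-D boolean array from the first-stage segmentation
    """
    ys, xs = np.nonzero(lung_mask)
    if ys.size == 0:  # no lung found: fall back to the full slice
        y0, y1, x0, x1 = 0, image.shape[0], 0, image.shape[1]
    else:
        y0, y1 = ys.min(), ys.max() + 1
        x0, x1 = xs.min(), xs.max() + 1
    crop = image[y0:y1, x0:x1]
    # nearest-neighbour resize to out_size x out_size
    ry = np.arange(out_size) * crop.shape[0] // out_size
    rx = np.arange(out_size) * crop.shape[1] // out_size
    return crop[np.ix_(ry, rx)]
```

Restricting the classifier's input to the lung bounding box discards irrelevant anatomy and lets the 256 × 256 input spend its resolution on the lung fields.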

Radiologist evaluation

In order to compare the performance of our AI model with the top human radiology experts, three senior experienced radiologists who were blinded to RT-PCR results were recruited and reviewed all de-identified chest CT images in the RWD group and scored each suspected case as COVID-19 or non-COVID-19 viral pneumonia. Information about the radiologists, including years in practice, average review time per case, cardiothoracic imaging fellowship, and COVID-19-specific training experience, is shown in Table S1.

Statistical analysis

All statistical analyses were performed with PyCharm IDE (version 3.5; JetBrains). The Shapiro-Wilk test was used to evaluate the distribution type, and Bartlett’s test was used to evaluate homogeneity of variance. Normally distributed data are expressed as mean ± standard deviation. Non-normally distributed data and ordinal data are expressed as median (interquartile range). Categorical variables are summarized as counts and percentages. Comparisons of quantitative data were evaluated using the Mann-Whitney U test and the Wilcoxon test; comparisons of categorical data were evaluated by the chi-square test and Fisher’s exact test. A p value < 0.05 was considered statistically significant. Missing data were omitted. The sensitivity and specificity for COVID-19 detection were calculated. The receiver operating characteristic (ROC) curve was plotted, and the area under the curve (AUC) was calculated with 95% confidence intervals (CIs) based on DeLong’s method.
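For instance, the AUC itself can be computed through its equivalence to the Mann-Whitney U statistic. The sketch below shows the point estimate only; the paper's confidence intervals additionally require DeLong's variance estimate, which is omitted here:

```python
import numpy as np

def auc_rank(y_true, scores):
    """AUC via the Mann-Whitney U statistic (ties receive mid-ranks).

    y_true: 0/1 labels; scores: model probabilities or scores.
    """
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    order = scores.argsort()
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    # assign mid-ranks to tied scores
    for v in np.unique(scores):
        m = scores == v
        ranks[m] = ranks[m].mean()
    pos = y_true == 1
    n1, n0 = pos.sum(), (~pos).sum()
    u = ranks[pos].sum() - n1 * (n1 + 1) / 2
    return u / (n1 * n0)
```

The AUC equals the probability that a randomly chosen COVID-19 case receives a higher score than a randomly chosen non-COVID-19 case.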

Results

Study population characteristics

A total of 881 CT scans from 696 patients suspected of COVID-19 pneumonia by radiologists were included in our study to differentiate COVID-19 pneumonia from other, non-COVID pneumonias. Among these patients, 470 were confirmed as COVID-19 and 226 were excluded on the basis of two negative RT-PCR results. The distribution of COVID-19 and non-COVID-19 patients differed between the model-training and RWD groups: 227 of 380 patients in the model-training group and 243 of 316 patients in the RWD group were confirmed to have COVID-19. Despite these different distributions, COVID-19 patients had a lower white blood cell count in both groups. Other clinical features did not differ significantly between the 2 groups. Detailed information is summarized in Tables 1 and 2.
Table 1

Clinical characteristics of patients in the study

Characteristics | All patients (n = 696) | COVID-19 patients (n = 470) | Non-COVID patients (n = 226) | p value
Age | 46.90 ± 15.65 | 44.03 ± 14.62 | 52.88 ± 17.73 | 0.002
Gender, male/female | 383/313 | 275/195 | 108/118 | 0.001
Number of CT scans | 881 | 634 | 247 | /
Epidemiological history, Yes/No | 361/335 | 307/163 | 54/172 | < 0.001
Symptom, Yes/No | 650/42 | 433/37 | 212/10 | 0.138
Underlying comorbidity, Yes/No | 193/503 | 112/358 | 81/145 | 0.001
Laboratory test
White blood cell count, mean ± sd (× 10⁹/L) | 7.01 ± 3.52 | 5.52 ± 2.31 | 10.10 ± 4.90 | < 0.001*
Lymphocyte count, mean ± sd (× 10⁹/L) | 1.25 ± 0.60 | 1.13 ± 0.48 | 1.49 ± 0.89 | < 0.001*
Lactate dehydrogenase, mean ± sd (U/L) | 411.22 ± 214.86 | 262.35 ± 96.62 | 673.52 ± 454.87 | < 0.001*
C-reactive protein, mean ± sd (mg/L) | 34.75 ± 36.91 | 28.04 ± 34.93 | 48.69 ± 44.28 | < 0.001*
Procalcitonin, mean ± sd (ng/mL) | 1.24 ± 2.99 | 0.09 ± 0.37 | 3.62 ± 8.16 | < 0.001*
Final diagnosis
COVID-19 | 470 | 470 | /
Bacterial infection | 106 | / | 106
Viral infection | 53 | / | 53
Others | 67 | / | 67

The italics indicate significant p values

*Wilcoxon rank-sum test and Fisher exact test were used if non-normal distribution or heterogenous variance of the data was detected

Table 2

Clinical characteristics in the model-training group and real-world data (RWD) group

Characteristics | All patients (n = 696) | Model-training group (n = 380) | RWD group (n = 316) | p value
Age | 46.90 ± 15.65 | 44.03 ± 12.96 | 50.35 ± 16.29 | < 0.001
Gender, male/female | 383/313 | 205/175 | 177/139 | 0.219
Number of CT scans | 881 | 563 | 318 | /
Final diagnosis
COVID-19 | 470 | 227 | 243 | < 0.001
Bacterial infection | 106 | 63 | 43
Viral infection | 53 | 36 | 17
Others | 67 | 54 | 13

The italics indicate significant p values

Wilcoxon rank-sum test and Fisher exact test were used if non-normal distribution or heterogenous variance of the data was detected


Model performance

Internal validation

The internal validation set comprised a total of 728 slices from 40 COVID-19 and 21 non-COVID-19 patients. When the threshold was set at 0.685, our DL model achieved its best performance in differentiating COVID-19 from non-COVID-19 pneumonia, with a sensitivity of 0.836, a specificity of 0.800, and an AUC of 0.906 (95% CI: 0.886–0.913) (Table 3, Fig. 3).
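A decision threshold such as 0.685 is typically chosen on validation data by maximizing Youden's J (sensitivity + specificity − 1). The paper does not state its exact selection criterion, so the sketch below is illustrative only:

```python
import numpy as np

def best_threshold(y_true, probs):
    """Pick the probability cut-off maximising Youden's J = sens + spec - 1."""
    y = np.asarray(y_true)
    p = np.asarray(probs)
    best_t, best_j = 0.5, -1.0
    for t in np.unique(p):  # candidate cut-offs: every observed probability
        sens = ((p >= t) & (y == 1)).sum() / max((y == 1).sum(), 1)
        spec = ((p < t) & (y == 0)).sum() / max((y == 0).sum(), 1)
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```

The threshold is fixed on the validation folds and then carried unchanged into the RWD evaluation, as done here.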
Table 3

Model performance in the internal validation group and RWD group

Group | Number of cases | Number of COVID-19 | Accuracy (%) | AUC | Sensitivity (%) | Specificity (%)
Internal validation group | 61 | 40 | 82 | 0.905 | 84 | 80
RWD group | 316 | 243 | 81 | 0.868 | 81 | 82

AUC area under the curve

Fig. 3

The performance of our DL model in the internal validation group and the real-world dataset (RWD) group. ROC curves and confusion matrixes were listed in the upper and lower part of the figure


Real-world dataset

To validate our DL model’s general applicability in China, we obtained CT images from two institutions representing epidemic and non-epidemic areas. Our DL diagnostic model achieved a sensitivity of 0.811, a specificity of 0.822, and an AUC of 0.868 (95% CI: 0.851–0.876) for COVID-19 pneumonia versus all other types of pneumonia; its accuracy in differentiating COVID-19 from non-COVID-19 pneumonia was 81% (95% CI: 77%, 84%). These results confirm the high performance, accuracy, and general applicability of our DL model within China in this prospective RWD cohort (Fig. 3). A comparison of the diagnostic performance of the three senior experienced radiologists and the AI system is listed in Table 4. Our results indicate that our IDANNet-based model can distinguish COVID-19 from non-COVID-19 viral pneumonia with accuracy non-inferior to that of experienced radiologists (Fig. 4).
Table 4

Performance results of the three radiologists and the AI expert system in the RWD group

Radiologist/model | TP | TN | FP | FN | Accuracy (%) | Sensitivity (%) | Specificity (%) | PPV (%) | NPV (%)
1 | 180 | 62 | 11 | 63 | 76 [70, 81] (242/316) | 74 [66, 82] (180/243) | 85 [73, 93] (62/73) | 94 [76, 99] (180/191) | 49 [40, 59] (62/126)
2 | 231 | 16 | 57 | 12 | 78 [67, 89] (247/316) | 95 [90, 99] (231/243) | 22 [16, 28] (16/73) | 80 [67, 92] (231/288) | 57 [54, 60] (16/28)
3 | 170 | 68 | 5 | 73 | 75 [59, 89] (238/316) | 70 [55, 81] (170/243) | 93 [89, 96] (68/73) | 97 [83, 99] (170/175) | 48 [36, 60] (68/141)
IDANNet | 197 | 60 | 13 | 46 | 81 [77, 84] (257/316) | 81 [71, 91] (197/243) | 82 [78, 85] (60/73) | 94 [88, 97] (197/210) | 57 [50, 64] (60/106)

Numbers in brackets are 95% confidence intervals, and numbers in parentheses are numbers of cases

TP true positive, FP false positive, TN true negative, FN false negative, PPV positive predictive value, NPV negative predictive value
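The summary statistics in Table 4 follow directly from each reader's 2 × 2 confusion counts; a small helper makes the definitions explicit (the table's bracketed confidence intervals are omitted here):

```python
def reader_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, specificity, PPV and NPV from a 2x2 table."""
    total = tp + tn + fp + fn
    return {
        "accuracy":    (tp + tn) / total,     # correct calls / all cases
        "sensitivity": tp / (tp + fn),        # detected COVID / all COVID
        "specificity": tn / (tn + fp),        # cleared non-COVID / all non-COVID
        "ppv":         tp / (tp + fp),        # positive calls that were right
        "npv":         tn / (tn + fn),        # negative calls that were right
    }
```

For example, radiologist 1's counts (TP = 180, TN = 62, FP = 11, FN = 63) yield an accuracy of 242/316 ≈ 76.6% and a sensitivity of 180/243 ≈ 74.1%.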

Fig. 4

The comparison of the diagnostic performance in the RWD between three senior experienced radiologists and the AI system. The AI model operated at 81.1% sensitivity and 82.2% specificity (shown as the star) using a decision threshold set on the model development dataset. The performances of the 3 experienced radiologists are plotted as dots

In order to show the interpretability of our model, we adopted Grad-CAM to visualize the regions most important to the model's decisions. The attention heatmaps were generated entirely by the model without additional manual annotation. Although the features learned by DL models reflect high-dimensional abstract mappings that are difficult for humans to perceive yet strongly associated with clinical outcomes, the attention regions aligned closely with the ROIs acquired by human radiologists for diagnosis. Three typical cases are illustrated in Fig. 5.
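The core of a Grad-CAM visualization is a gradient-weighted sum of the last convolutional layer's feature maps. Assuming the activations and the class-score gradients have already been extracted from the network (the extraction itself is framework-specific and not shown), the heatmap computation reduces to:

```python
import numpy as np

def grad_cam(feature_maps, grads):
    """Grad-CAM heatmap from the last conv layer's activations and gradients.

    feature_maps: (K, H, W) activations A_k
    grads:        (K, H, W) d(score)/dA_k for the predicted class
    Returns an (H, W) heatmap normalised to [0, 1].
    """
    weights = grads.mean(axis=(1, 2))  # alpha_k: gradients pooled per channel
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

The resulting low-resolution map is then upsampled to the CT slice size and overlaid as the attention heatmap.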
Fig. 5

Three attention heatmaps from the last “pooling” layer in our DL model. The attention regions overlap with the ROIs acquired by human radiologists. All these cases were diagnosed as possible COVID-19 pneumonia by radiologists but were correctly classified by the DL model. Thus, it is desirable to investigate which imaging features the DL model relies on and how AI acquires its classification ability, in order to improve the CT-based identification capability of clinicians and radiologists. A typical CT image in a COVID-19 pneumonia patient is illustrated in 1a–1c, with subpleural GGO and a “crazy paving” sign inside the lesion. A non-typical COVID-19 image is shown in 2a–2c with total consolidation of the right inferior lobe, and a non-COVID viral pneumonia case is presented in 3a–3c with typical COVID-19 CT manifestations


Discussion

After the global outbreak of COVID-19, early screening and intervention for suspected COVID-19 patients, including quarantine, are necessary to ensure both timely treatment of infected patients and the continuity of other medical activities [2]. Chest CT currently serves as a screening method in clinically suspected patients. However, because the radiological manifestations of COVID-19 lack specificity, it is hard for radiologists to distinguish COVID-19 from other types of pneumonia. Furthermore, the diagnosis of COVID-19 is quite subjective, and radiological diagnosis varies with the local incidence rate. It was reported that in epidemic areas the positive predictive value (PPV) of radiologists in differentiating COVID-19 from other types of pneumonia reached 65%, which we attribute partly to the high incidence of COVID-19 rather than merely to the diagnostic ability of the radiologists [8]. When the epidemiological characteristics change, the PPV of radiological diagnosis tends to drop dramatically, and it becomes questionable whether chest CT is still valuable in such a situation. We therefore sought to develop an AI system to help radiologists distinguish COVID-19 from other similar types of pneumonia in an objective way. In this study, we designed a novel CNN-based DL model, and its AUC on the internal validation data reached 0.906. To diminish the risk of bias, enhance real-world clinical relevance, and improve reporting and transparency, a real-world cohort from 2 institutions in epidemic and non-epidemic areas was used to test the model's performance. The AUC in the RWD group was 0.868, non-inferior to experienced radiologists, suggesting promising clinical usage with a higher level of evidence. Methodologically, AI-based segmentation is an important step in the quantification of COVID-19 images: segmentation helps models focus on features within human-selected regions of interest (ROIs). 
Unlike other studies, we selected the suspected CT images according to the prior knowledge of radiologists and fed them to our DL model directly, without any manual segmentation. In essence, the radiologists' selection of suspected cases is itself a form of “segmentation.” These two protocols can be summarized as “segmentation first, diagnosis later” and “selection first, diagnosis later.” Beyond the fact that the latter protocol can be applied directly in our clinical practice, there are two further reasons for our choice. First, none of the quantified parameters extracted from segmented regions has yet been proved useful for disease diagnosis, and most of them cannot be clearly explained. Second, a robust segmentation network requires a large number of ROIs for training and relies heavily on the accuracy of human-drawn ROIs, which is costly and time-consuming, although Dr. Zhang and his team have done notable work in this area [20]. After analyzing the pre-selected images, our DL model outputs diagnostic suggestions. Our test result on the RWD was non-inferior to that of Zhang’s study, while the sample size used to train our model was much smaller. It would be interesting for further studies to compare the diagnostic efficiency of DL models trained under these two protocols. To explain how our model works, the important regions it recognized automatically were visualized as attention heatmaps. The suspicious pulmonary areas detected by our model overlapped substantially with the infected areas recognized by radiologists. 
Some radiological features, such as GGOs and the crazy paving sign, which have been reported to be crucial for COVID-19 diagnosis, were also included in the areas highlighted by the DL model, indicating that the high-dimensional features excavated by the DL model may reflect radiological characteristics perceived by radiologists and make their quantification possible. Building on the prior evaluation by radiologists, our new DL model has the potential to be added directly to the clinical routine: when a suspected case is detected, radiologists can send the images to the DL model and obtain a diagnostic suggestion with an accuracy of over 80%, which is convenient and feasible. Despite the good performance of our novel DL system, several limitations remain. First, we used RT-PCR results as the gold standard, which is frequently challenged for its low positive rate; the sensitivity of chest CT for COVID-19 might therefore be overestimated and the specificity underestimated. Second, prognostic events such as death or deterioration were not considered in our study. Third, we did not enroll special populations such as children and pregnant women. Our established DL model achieved accurate identification of COVID-19 from other suspected pneumonias in a real-world setting on chest CT using prospective validation, which could aid the clinical decision-making process. Future studies could investigate a complete set of standard AI-based workflows for this global disaster, from development to verification, to integrate limited data resources and iterate existing AI products.
References (23 in total)

1.  Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs.

Authors:  Varun Gulshan; Lily Peng; Marc Coram; Martin C Stumpe; Derek Wu; Arunachalam Narayanaswamy; Subhashini Venugopalan; Kasumi Widner; Tom Madams; Jorge Cuadros; Ramasamy Kim; Rajiv Raman; Philip C Nelson; Jessica L Mega; Dale R Webster
Journal:  JAMA       Date:  2016-12-13       Impact factor: 56.272

2.  Correlation of Chest CT and RT-PCR Testing for Coronavirus Disease 2019 (COVID-19) in China: A Report of 1014 Cases.

Authors:  Tao Ai; Zhenlu Yang; Hongyan Hou; Chenao Zhan; Chong Chen; Wenzhi Lv; Qian Tao; Ziyong Sun; Liming Xia
Journal:  Radiology       Date:  2020-02-26       Impact factor: 11.105

3.  Updated understanding of the outbreak of 2019 novel coronavirus (2019-nCoV) in Wuhan, China.

Authors:  Weier Wang; Jianming Tang; Fangqiang Wei
Journal:  J Med Virol       Date:  2020-02-12       Impact factor: 2.327

4.  Clinical Characteristics of Coronavirus Disease 2019 in China.

Authors:  Wei-Jie Guan; Zheng-Yi Ni; Yu Hu; Wen-Hua Liang; Chun-Quan Ou; Jian-Xing He; Lei Liu; Hong Shan; Chun-Liang Lei; David S C Hui; Bin Du; Lan-Juan Li; Guang Zeng; Kwok-Yung Yuen; Ru-Chong Chen; Chun-Li Tang; Tao Wang; Ping-Yan Chen; Jie Xiang; Shi-Yue Li; Jin-Lin Wang; Zi-Jing Liang; Yi-Xiang Peng; Li Wei; Yong Liu; Ya-Hua Hu; Peng Peng; Jian-Ming Wang; Ji-Yang Liu; Zhong Chen; Gang Li; Zhi-Jian Zheng; Shao-Qin Qiu; Jie Luo; Chang-Jiang Ye; Shao-Yong Zhu; Nan-Shan Zhong
Journal:  N Engl J Med       Date:  2020-02-28       Impact factor: 91.245

5.  Efficient prediction of drug-drug interaction using deep learning models.

Authors:  Prashant Kumar Shukla; Piyush Kumar Shukla; Poonam Sharma; Paresh Rawat; Jashwant Samar; Rahul Moriwal; Manjit Kaur
Journal:  IET Syst Biol       Date:  2020-08       Impact factor: 1.615

Review 6.  Insight into 2019 novel coronavirus - An updated interim review and lessons from SARS-CoV and MERS-CoV.

Authors:  Mingxuan Xie; Qiong Chen
Journal:  Int J Infect Dis       Date:  2020-04-01       Impact factor: 3.623

7.  Clinically Applicable AI System for Accurate Diagnosis, Quantitative Measurements, and Prognosis of COVID-19 Pneumonia Using Computed Tomography.

Authors:  Kang Zhang; Xiaohong Liu; Jun Shen; Zhihuan Li; Ye Sang; Xingwang Wu; Yunfei Zha; Wenhua Liang; Chengdi Wang; Ke Wang; Linsen Ye; Ming Gao; Zhongguo Zhou; Liang Li; Jin Wang; Zehong Yang; Huimin Cai; Jie Xu; Lei Yang; Wenjia Cai; Wenqin Xu; Shaoxu Wu; Wei Zhang; Shanping Jiang; Lianghong Zheng; Xuan Zhang; Li Wang; Liu Lu; Jiaming Li; Haiping Yin; Winston Wang; Oulan Li; Charlotte Zhang; Liang Liang; Tao Wu; Ruiyun Deng; Kang Wei; Yong Zhou; Ting Chen; Johnson Yiu-Nam Lau; Manson Fok; Jianxing He; Tianxin Lin; Weimin Li; Guangyu Wang
Journal:  Cell       Date:  2020-05-04       Impact factor: 41.582

Review 8.  Emerging coronaviruses: Genome structure, replication, and pathogenesis.

Authors:  Yu Chen; Qianyun Liu; Deyin Guo
Journal:  J Med Virol       Date:  2020-02-07       Impact factor: 2.327

9.  Classification of COVID-19 patients from chest CT images using multi-objective differential evolution-based convolutional neural networks.

Authors:  Dilbag Singh; Vijay Kumar; Manjit Kaur
Journal:  Eur J Clin Microbiol Infect Dis       Date:  2020-04-27       Impact factor: 3.267

10.  Detecting Respiratory Pathologies Using Convolutional Neural Networks and Variational Autoencoders for Unbalancing Data.

Authors:  María Teresa García-Ordás; José Alberto Benítez-Andrades; Isaías García-Rodríguez; Carmen Benavides; Héctor Alaiz-Moretón
Journal:  Sensors (Basel)       Date:  2020-02-22       Impact factor: 3.576

