Literature DB >> 33230503

COVID-19 pneumonia accurately detected on chest radiographs with artificial intelligence.

Francisco Dorr1, Hernán Chaves1,2, María Mercedes Serra1,2, Andrés Ramirez1, Martín Elías Costa1, Joaquín Seia1, Claudia Cejas2, Marcelo Castro3, Eduardo Eyheremendy4, Diego Fernández Slezak1,5,6, Mauricio F Farez1,7.   

Abstract

PURPOSE: To investigate the diagnostic performance of an artificial intelligence (AI) system for detection of COVID-19 in chest radiographs (CXR), and to compare results to those of physicians working alone or with AI support.
MATERIALS AND METHODS: An AI system was fine-tuned to discriminate confirmed COVID-19 pneumonia from other viral and bacterial pneumonias and from non-pneumonia patients, using 302 CXR images from adult patients retrospectively sourced from nine different databases. Fifty-four physicians, blind to diagnosis, were invited to interpret images from a test set under identical conditions and were randomly assigned either to receive or not to receive support from the AI system. Diagnostic performance of physicians working with and without AI support was then compared. AI system performance was evaluated using the area under the receiver operating characteristic curve (AUROC), and physician sensitivity and specificity were compared to those of the AI system.
RESULTS: Discrimination of COVID-19 pneumonia by the AI system yielded an AUROC of 0.96 in the validation set and 0.83 in the external test set. The AI system outperformed physicians overall in the AUROC (70% increase in sensitivity and 1% increase in specificity, p < 0.0001). With AI support, physician diagnostic sensitivity increased from 47% to 61% (p < 0.001), although specificity decreased from 79% to 75% (p = 0.007).
CONCLUSIONS: Our results suggest that AI-supported interpretation of chest radiographs (CXR) increases physician diagnostic sensitivity for COVID-19 detection. This human-machine partnership may help expedite triaging efforts and improve resource allocation in the current crisis.
© 2020 Elsevier B.V.

Keywords:  AI, artificial intelligence; AUPR, area under the precision-recall; AUROC, area under the receiver operating characteristic; Artificial intelligence; COVID-19; CT, computed tomography; CXR, chest radiographs; Chest; DL, deep learning; Diagnostic performance; RT-PCR, real-time reverse transcriptase–polymerase chain reaction; Radiography

Year:  2020        PMID: 33230503      PMCID: PMC7674009          DOI: 10.1016/j.ibmed.2020.100014

Source DB:  PubMed          Journal:  Intell Based Med        ISSN: 2666-5212


Introduction

Starting on December 8, 2019, a series of viral pneumonia cases of unknown etiology emerged in Wuhan, Hubei province, China [[1], [2], [3]]. Sequencing analysis from respiratory tract samples identified a novel coronavirus, tentatively named 2019-nCoV by the World Health Organization and subsequently designated as SARS-CoV-2 by the International Committee on Taxonomy of Viruses [4]. During the first two months of 2020, the virus causing the disease known as COVID-19 spread worldwide, showing evidence of human-to-human transmission between close contacts [5]. The World Health Organization declared the coronavirus outbreak a pandemic on March 11, and countries around the world struggled with an unprecedented surge in confirmed cases [6]. SARS-CoV-2 causes varying degrees of illness, the most common symptoms of which include fever and cough. However, acute respiratory distress syndrome may develop in a subset of patients, requiring their admission to intensive care and mechanical ventilation support, some of whom may die from multiple organ failure [7,8]. Current COVID-19 guidelines rely heavily on clinical, laboratory and imaging findings to triage patients [[9], [10], [11], [12]]. The World Health Organization interim guidance for laboratory testing has recommended use of nucleic acid amplification tests such as real-time reverse transcriptase–polymerase chain reaction (RT-PCR) for COVID-19 diagnosis in suspected cases [13]. However, due to overwhelming levels of demand, RT-PCR kit shortages have been widely reported [14,15]. Also, RT-PCR from nasopharyngeal and oropharyngeal swabs (the most common respiratory tract sampling sites) obtained within the first 14 days of illness onset, show varying sensitivity rates ranging between 29.6 and 73.3% and take several hours to process [16]. 
Although chest radiographs (CXR) and computed tomography (CT) are key imaging tools for pulmonary disease diagnosis, their role in the management of COVID-19 has not been clearly defined. Formal statements have been issued both by a multinational consensus from the Fleischner Society, proposing CXR as a surrogate for RT-PCR in resource-constrained environments [12], and by the American College of Radiology, which recently recommended avoiding chest CT as a first-line test for COVID-19 diagnosis, endorsing use of portable CXR instead in specific cases [17]. Artificial intelligence (AI) has proven useful for CXR analysis in numerous clinical settings [[18], [19], [20], [21], [22]], including preliminary work on COVID-19 [[23], [24], [25], [26]]. However, the performance of these algorithms and their impact on clinical practice have not been thoroughly evaluated. Thus, we aimed to investigate the diagnostic performance of a fine-tuned AI system for detection of COVID-19 using the DenseNet 121 architecture, and to compare results to those of radiologists and emergency care physicians working with or without AI support.

Material and methods

Dataset construction

For training and validation, a total of 302 CXR images from adult patients were randomly sourced from nine different databases, eight of them public and published online, and one from a local institution (patient age range: 17–90 years; gender: 97 female, 156 male, 49 not available). The CXR images collected comprised three distinct groups: those corresponding to a COVID-19 pneumonia diagnosis (n = 102), a second set of non-COVID-19 pneumonia cases (n = 100), and a third group including normal CXR images and other non-pneumonia findings (n = 100). Inclusion in the COVID-19 group required prior confirmatory RT-PCR (retrospective study). The final database was curated by a radiologist who reviewed every CXR for quality eligibility criteria (i.e., adequate exposure and no major artifacts). In cases for which age data was not available (n = 51/302, see appended database), CXR images were double-checked for complete ossification. An independent test set including 60 CXRs (age range: 20–80 years; gender: 29 female, 25 male, 6 not available), equally distributed among the three groups, was assembled and curated using similar criteria.

Training and validation of the AI system

We based our COVID-19 CXR detection model on a pre-existing deep learning (DL) CXR model, previously trained for the CheXpert competition and applied to a wide range of pathologies including pneumonia, pleural effusion, pneumothorax, and cardiomegaly, among others [27]. The model was trained using the DenseNet 121 architecture [28], in which final outputs (i.e., labels) were assigned by the last fully connected layer, with one neuron per label, resulting in a multi-label prediction. To perform transfer learning, we replaced the last layer with another fully connected layer with 3 possible outputs: 1) COVID-19 pneumonia, 2) non-COVID-19 pneumonia and 3) normal CXR or other non-pneumonia findings. We kept the model loss function (binary cross-entropy) and final activation function (sigmoid) the same as in the original model trained for CheXpert, so the task remained a multi-label problem. To train this new model, parameter weights of every layer were frozen, except for the last block of layers, composed of a dense layer, a dropout layer and the new output layer, which remained unfrozen for 20 epochs (Fig. 1).
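The replaced classification head described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' code: the feature vectors stand in for the frozen DenseNet 121 feature extractor (which is not reproduced here), and all array sizes and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical output of the frozen feature extractor: one 1024-d
# vector per image (DenseNet 121's pooled features stay fixed).
features = rng.standard_normal((4, 1024))

# New trainable fully connected layer: 3 outputs, one per label
# (COVID-19 pneumonia, non-COVID-19 pneumonia, normal/other).
W = rng.standard_normal((1024, 3)) * 0.01
b = np.zeros(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Multi-label prediction: an independent sigmoid per output neuron,
# so the three probabilities need not sum to 1.
probs = sigmoid(features @ W + b)

# Binary cross-entropy averaged over samples and labels, matching the
# CheXpert-style multi-label setup the paper describes keeping.
targets = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)
eps = 1e-12
bce = -np.mean(targets * np.log(probs + eps)
               + (1 - targets) * np.log(1 - probs + eps))
print(probs.shape, float(bce) > 0)
```

In a real training loop, only `W` and `b` (plus the dense and dropout layers of the final block) would receive gradient updates for the 20 epochs; all earlier layers keep their CheXpert weights.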
Fig. 1

Convolutional Neural Network Diagram. This chart summarizes the strategy used in the study. Using a convolutional neural network, pre-trained with a dataset of over 200,000 CXRs and 5 output classes; all layers but the last block of layers were frozen and transferred onto a new network with new labels (COVID-19 pneumonia, Other pneumonias, Normal/Other findings). Final fully-connected layers were then retrained over the transferred ones.

To exploit the limited number of COVID-19 cases, we used the whole training set and applied a 5-fold cross-validation method, splitting the dataset into 80% for training and 20% for internal validation on each fold. We calculated the area under the receiver operating characteristic (AUROC) curves for the three groups on each fold. Once training was done for each fold, we selected the epoch with the best metric average across all cross-validation folds (epoch 20) and retrained the algorithm with those best parameters using the whole training set. The performance of the algorithm was then validated on a completely independent test set (n = 60). We evaluated performance on this dataset using sensitivity and specificity, as well as AUROC curve measures. Given that the model output was multi-label, we selected the output class with the highest probability to convert it to a multiclass problem and calculate the metrics. For example, if the multi-label sigmoid output prediction was (0.2, 0.6, 0.9), we took the maximum probability (0.9) and returned the vector (0, 0, 1). We found that doing this, instead of retraining the model explicitly with a multi-class loss and a softmax output, gave better performance and avoided a bias toward labeling almost everything as COVID-positive.
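The multi-label-to-multiclass conversion described above, taking the argmax of the sigmoid outputs, can be sketched as:

```python
import numpy as np

def multilabel_to_multiclass(sigmoid_probs):
    """Convert independent sigmoid outputs to a one-hot multiclass
    prediction by keeping only the highest-probability class."""
    probs = np.asarray(sigmoid_probs, dtype=float)
    onehot = np.zeros_like(probs, dtype=int)
    onehot[np.argmax(probs)] = 1
    return onehot

# The paper's example: (0.2, 0.6, 0.9) -> (0, 0, 1)
print(multilabel_to_multiclass([0.2, 0.6, 0.9]))  # [0 0 1]
```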

Clinical performance study design

To evaluate diagnostic performance of physicians interpreting CXRs, with and without support of the DL-model, we conducted an online survey. Physicians (radiologists [n = 23] and emergency care physicians [n = 31]) had to decide whether CXR findings were compatible with COVID-19 pneumonia, non-COVID-19 pneumonia or neither. Sixty cases in total (i.e., the entire test set: 20 COVID-19 pneumonia, 20 non-COVID-19 pneumonia and 20 non-pneumonia CXRs) were shown to each survey responder. An AI prediction was shown in randomized fashion to half the cases in each subset. Physicians had a maximum of 20 minutes to complete the survey. A full set of answers is available online.1

Statistical analysis

To evaluate AI system performance, AUROC was estimated using the normalized Mann-Whitney U statistic. We then compared the sensitivity and specificity of physicians to the optimal cutoff point of the AI system. To establish the effect of AI support on physician performance, we constructed a mixed model with a repeated-measures design, including presence or absence of AI support, seniority level (junior vs senior, based on years since specialty degree, under or over 5 years) and type of specialty (radiologists vs other specialists), with interactions, as independent variables, and sensitivity and specificity as dependent variables (Supplementary Table). Statistical analyses were conducted using the Python scikit-learn library and Stata version 12.1. Unless noted, mean ± standard deviation is reported. Two-tailed P values < 0.05 were considered statistically significant.
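The AUROC estimate via the normalized Mann-Whitney U statistic can be sketched in plain NumPy (illustrative only, with toy scores; the study used scikit-learn and Stata):

```python
import numpy as np

def auroc_mann_whitney(scores, labels):
    """AUROC as the normalized Mann-Whitney U statistic: the probability
    that a randomly chosen positive case scores higher than a randomly
    chosen negative case, with ties counting one half."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Exhaustive pairwise comparison; fine for small sets like these.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    u = greater + 0.5 * ties
    return u / (len(pos) * len(neg))

# Toy example with hypothetical model scores: a perfect ranking gives 1.0.
print(auroc_mann_whitney([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # 1.0
```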

Code availability

Because the DL system source code used for this analysis contains proprietary information, it cannot be made fully available for public release. However, non-proprietary parts of the code have been released in a public repository at https://bitbucket.org/aenti/entelai-covid-paper. All study experiments and implementation methods are described in detail, and the tool itself is available online at https://covid.entelai.com, to enable independent replication.

Data availability

Local datasets and links to image repositories used in the study are publicly available online.2

Results

We fine-tuned a pre-established AI system using a dataset of 302 CXR of COVID-19, other pneumonia, and other non-pneumonia cases. After 20 epochs of training, we obtained a mean AUROC curve among the 5 cross-validation folds of 0.96 ± 0.02 (see Fig. 2 and Table 1 ).
Fig. 2

Performance of the Artificial Intelligence (AI) System in COVID-19 Prediction. Receiver operating characteristic (ROC) curve and area under the curve (AUC) of the AI system on the validation set for each of the 5 folds, with a mean area under the receiver operating characteristic (AUROC) curve of 0.96 ± 0.02 (n = 302).

Table 1

Performance of the AI system in the training dataset using the average of 5-fold cross-validation.

Diagnosis                         Sensitivity   Specificity   AUROC
Covid-19 pneumonia (n = 102)      94%           81%           0.96
Non-Covid-19 pneumonia (n = 100)  55%           95%           0.87
Other (n = 100)                   84%           91%           0.93
One of the traditional criticisms of DL models is the risk of "black box" predictions, implying that the information the model uses to make predictions is unclear and may not be meaningful. Recently, activation maps have been developed as a way to depict what models use to support their predictions [29]. We analyzed activation maps for COVID-19 and compared them to those of other pneumonias, to validate the model and identify potential sources of information. The activation maps were obtained by taking the output of the average pooling layer and computing the mean across the channel dimension [30]. As shown in Fig. 3, activation maps generated by this AI system relied heavily on lower pulmonary lobes, and on peripheral lung regions in particular. Of note, peripheral infection patterns have recently been described as a key feature in COVID-19 [8,31], suggesting the AI system was able to predict COVID-19 diagnosis using relevant information from CXRs.
Fig. 3

Activation Maps of the Artificial Intelligence (AI) System. a) Example of a single activation map on a CXR image from the COVID-19 group. b) Mean activation map of the Non-COVID-19 pneumonia category. c) Mean activation map of the COVID-19 pneumonia category. d) Delta activation map between the COVID-19 and Non-COVID-19 pneumonia categories, calculated as max(COVID(i,j) − Non-COVID(i,j), 0) for each pixel (i,j), depicting lower and peripheral areas as more relevant for the differentiation.
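The channel-mean activation maps and the per-pixel delta map described above can be sketched as follows; the feature-map arrays here are random hypothetical stand-ins for the real model's average-pooling-layer input, and all shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical final-convolution feature maps: (channels, height, width).
fmap_covid = rng.random((8, 7, 7))
fmap_other = rng.random((8, 7, 7))

# Activation map: mean across the channel dimension.
act_covid = fmap_covid.mean(axis=0)
act_other = fmap_other.mean(axis=0)

# Delta map: per-pixel max(COVID - non-COVID, 0), highlighting regions
# more relevant to the COVID-19 class than to other pneumonias.
delta = np.maximum(act_covid - act_other, 0.0)
print(delta.shape, bool((delta >= 0).all()))  # (7, 7) True
```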

Since training can overfit predictions to a particular dataset, we generated an independent test set comprising 60 images (20 per category) to evaluate AI system performance. AUROC, Brier and mean absolute error scores were obtained on a one-vs-rest basis. Brier scores in particular are widely used in medical research to assess and compare model prediction accuracy [32]. Values range from 0 to 1, with 0 being the best possible outcome. Although they can be combined into a single multiclass score, in this study we report Brier scores by class, to obtain a better idea of how well the model performed for each one. As shown in Table 2 and Fig. 4, performance of the model was not as good as in validation, but nevertheless acceptable, since the AI system was able to predict COVID-19 with a sensitivity and specificity of 80% and an AUROC of 0.84. This difference between the cross-validation and test results could be explained by the datasets used. Since the number of instances in each dataset is low, it is almost impossible to obtain perfect generalization. The model may have learned particularities of the training set; despite cross-validation and dropout regularization, overfitting to the specific dataset could not be completely overcome. More data will be needed to achieve similar scores between cross-validation and the test set.
Table 2

Performance of the AI system in the test dataset.

Diagnosis                        Sensitivity   Specificity   AUROC   F1 score   Brier score   MAE
Covid-19 pneumonia (n = 20)      80%           80%           0.84    0.73       0.16          0.28
Non-Covid-19 pneumonia (n = 20)  60%           90%           0.88    0.67       0.14          0.26
Other (n = 20)                   65%           83%           0.86    0.65       0.15          0.26

AI: artificial intelligence, AUROC: area under the receiver operating characteristics, MAE: mean absolute error.
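The one-vs-rest Brier scores reported above can be sketched as follows; this is a minimal NumPy illustration with made-up predicted probabilities, not the study's data.

```python
import numpy as np

def brier_per_class(probs, true_labels, n_classes=3):
    """One-vs-rest Brier score for each class: the mean squared
    difference between the predicted probability and the 0/1 outcome
    (0 is a perfect score, 1 the worst)."""
    probs = np.asarray(probs, dtype=float)          # (n_samples, n_classes)
    onehot = np.eye(n_classes)[np.asarray(true_labels)]
    return ((probs - onehot) ** 2).mean(axis=0)

# Hypothetical predictions for three cases with true classes 0, 1, 2.
p = [[0.9, 0.05, 0.05],
     [0.2, 0.7, 0.1],
     [0.1, 0.2, 0.7]]
print(brier_per_class(p, [0, 1, 2]).round(3))
```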

Fig. 4

Performance of the Artificial Intelligence (AI) System on the Train and Test Sets, Compared to the Performance of Physicians in COVID-19 Prediction. Receiver operating characteristic (ROC) curve and area under the curve (AUC) of the AI system on the train and test sets. Physician performance with and without AI support is compared.


Clinical performance results

We next analyzed whether identification and separation of COVID-19 by physicians was adequate, given the novelty of the disease and the lack of worldwide experience. To this end, we tested the performance of 60 physicians from several different referral centers in South America. Six physicians were excluded for not completing the survey in time (n = 4) or not answering a minimum number of questions (n = 2). Fifty-four physicians from Argentina (n = 49), Chile (n = 4) and Colombia (n = 1) were included. Given the good performance of the model, we randomly informed physicians of the AI system's prediction for 50% of the images (which could be correct or incorrect, as per its performance on the same test set). The AI system's prediction was shared with physicians as a likelihood percentage for each condition. Physicians then had to give the most likely diagnosis, given the AI suggestion. As shown in Fig. 4, sensitivity and specificity for COVID-19 prediction based on CXR by physicians were 47% and 79% respectively, with an increase in sensitivity to 61% (p < 0.001) and a decrease in specificity to 74% (p = 0.007) when using AI support. No significant differences between radiologists and emergency care physicians were observed, nor did years of training affect overall performance results (data not shown).

Discussion

In the setting of the COVID-19 pandemic, it is probable that RT-PCR tests will become more robust, quicker, and ubiquitous. However, due to the current shortage and limitations of RT-PCR kits, diagnostic imaging modalities such as CXR and CT have been proposed as surrogate methods for COVID-19 triage. Some researchers have even reported chest CT showing higher sensitivity for COVID-19 detection than RT-PCR from swab samples [33,34]. Mei et al. went further and used AI to integrate chest CT findings with clinical symptoms, exposure history and laboratory testing, achieving an AUROC of 0.92 and sensitivity equal to that of a senior thoracic radiologist [35]. However, the American College of Radiology currently recommends CT be reserved for hospitalized, symptomatic patients with specific clinical indications [17]. CT also increases exposure to radiation, is less cost-effective, is not widely available and requires appropriate infection control procedures during and after examination, including closing scanning rooms for up to 1 h for airborne precaution measures [36]. This is why CXR (the most commonly performed diagnostic imaging examination) has been proposed as a first-line imaging study when COVID-19 is suspected, especially in resource-constrained scenarios [11,12]. Portable X-ray units are particularly suitable, as they can be moved to the emergency department (ED) or intensive care unit and easily cleaned afterwards [17]. Most clinicians have less experience interpreting CXRs than radiologists. In the ED setting, however, physicians with no formal radiology training are the ones most often reporting CXR findings. Gatt et al. found sensitivity levels as low as 20% for CXR evaluation by emergency care physicians [37]. One would expect this sensitivity to be even lower in the setting of a new disease like COVID-19. At the other end of the spectrum, Wong et al. found a thoracic radiologist sensitivity level for CXR diagnosis in a cohort of COVID-19 patients of 69% at baseline [38], and Cozzi et al. found sensitivities as high as 100% in experienced radiologists [39]. In our study we noted a low sensitivity (in both radiologists and emergency care physicians) for the diagnosis of COVID-19 pneumonia. This could be explained by the fact that, at the time of the clinical study, most physicians who participated in the survey had been exposed to few COVID-19 cases. Low sensitivity could also be related to the online survey design, as physicians evaluated CXRs in a different fashion from their clinical practice, with a limited amount of time to give a diagnosis. We also noted decreased specificity, due to an increased number of false positives in the AI-supported group. In every case, false positives arose from doubts over the "Other Pneumonias" category; although the AI model correctly predicted and presented the label "Other Pneumonias", physicians were still inclined to favor a COVID-19 diagnosis. The significance and clinical impact of this effect is unclear and deserves further evaluation. AI has proven useful in CXR analysis for many diseases [[18], [19], [20], [21], [22]]. In the setting of COVID-19 emergence, several AI models based on DL have been developed around the world, with varying results in terms of accuracy in detecting COVID-19-infected patients based on CXR [[23], [24], [25], [26]]. Moreover, none of these models has been tested in real or simulated clinical scenarios. Murphy et al. developed an AI system for the evaluation of CXR in the setting of COVID-19 and achieved a lower AUROC (0.81), and their test set came from a single institution [40]. They compared the performance of the AI system to radiologist performance but did not evaluate the change in radiologists' diagnostic accuracy without and with AI support, as we did.
Considering the prevalence of adults in the COVID-19 group, we chose to exclude pediatric databases to avoid major bias in training and testing. Early diagnosis, isolation and prompt clinical management are the three public health strategies collectively contributing to containing the spread of COVID-19. AI models building on the first of these premises might be significant [41]. In this study, we designed and evaluated a DL model trained to detect COVID-19 on CXR images. On an independent test dataset, the model showed 80% sensitivity and specificity for COVID-19 detection, with an AUROC value of 0.84. We also observed improved diagnostic sensitivity in physician performance (both for radiologists and emergency care physicians) and decreased specificity. Of note, despite AI system support, physicians did not reach or surpass AI metrics. Our results differ from the work of Patel et al., who tested a model in a simulated clinical scenario applied to CXR pneumonia diagnosis and achieved maximum diagnostic accuracy by combining radiologist and AI performance [42]. This could have been due to a lack of formal training in incorporating AI recommendations, or a lack of trust in our model predictions. Both hypotheses should be validated in future studies. Our model has significant limitations. First, despite the large number of CXRs used to train the original model (around 224,000 images), only a small number of CXRs were added to our DL model (around 100 images per category) using a transfer learning approach. Second, our training set is mostly based on adult patients' CXRs from China and Italy. Third, our model could also be prone to selection bias, as databases tend to include more severe or complicated cases. Since the disease has emerged recently, few good-quality, curated COVID-19 CXR databases are available. Inclusion of cases of all ages, from every region around the world, would certainly improve AI systems' diagnostic accuracy and reliability.

Conclusions

In conclusion, our data suggest that physician performance can be improved using AI systems such as the one described here. We showed an increase in sensitivity from 47% to 61% for COVID-19 prediction based on CXR. Future prospective studies are needed to further evaluate the clinical and public health impact of the combined work of physicians and AI systems.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Declaration of competing interest

The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Mauricio F. Farez has received professional travel/accommodations stipends from Merck-Serono Argentina, Teva Argentina and Novartis Argentina. The rest of the authors declare no competing interests.
References (27 in total)

1.  Chest radiographs in the emergency department: is the radiologist really necessary?

Authors:  M E Gatt; G Spectre; O Paltiel; N Hiller; R Stalnikowicz
Journal:  Postgrad Med J       Date:  2003-04       Impact factor: 2.401

2.  Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning.

Authors:  Daniel S Kermany; Michael Goldbaum; Wenjia Cai; Carolina C S Valentim; Huiying Liang; Sally L Baxter; Alex McKeown; Ge Yang; Xiaokang Wu; Fangbing Yan; Justin Dong; Made K Prasadha; Jacqueline Pei; Magdalene Y L Ting; Jie Zhu; Christina Li; Sierra Hewett; Jason Dong; Ian Ziyar; Alexander Shi; Runze Zhang; Lianghong Zheng; Rui Hou; William Shi; Xin Fu; Yaou Duan; Viet A N Huu; Cindy Wen; Edward D Zhang; Charlotte L Zhang; Oulan Li; Xiaobo Wang; Michael A Singer; Xiaodong Sun; Jie Xu; Ali Tafreshi; M Anthony Lewis; Huimin Xia; Kang Zhang
Journal:  Cell       Date:  2018-02-22       Impact factor: 41.582

3.  Correlation of Chest CT and RT-PCR Testing for Coronavirus Disease 2019 (COVID-19) in China: A Report of 1014 Cases.

Authors:  Tao Ai; Zhenlu Yang; Hongyan Hou; Chenao Zhan; Chong Chen; Wenzhi Lv; Qian Tao; Ziyong Sun; Liming Xia
Journal:  Radiology       Date:  2020-02-26       Impact factor: 11.105

4.  Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China.

Authors:  Chaolin Huang; Yeming Wang; Xingwang Li; Lili Ren; Jianping Zhao; Yi Hu; Li Zhang; Guohui Fan; Jiuyang Xu; Xiaoying Gu; Zhenshun Cheng; Ting Yu; Jiaan Xia; Yuan Wei; Wenjuan Wu; Xuelei Xie; Wen Yin; Hui Li; Min Liu; Yan Xiao; Hong Gao; Li Guo; Jungang Xie; Guangfa Wang; Rongmeng Jiang; Zhancheng Gao; Qi Jin; Jianwei Wang; Bin Cao
Journal:  Lancet       Date:  2020-01-24       Impact factor: 79.321

5.  Clinical Characteristics of Coronavirus Disease 2019 in China.

Authors:  Wei-Jie Guan; Zheng-Yi Ni; Yu Hu; Wen-Hua Liang; Chun-Quan Ou; Jian-Xing He; Lei Liu; Hong Shan; Chun-Liang Lei; David S C Hui; Bin Du; Lan-Juan Li; Guang Zeng; Kwok-Yung Yuen; Ru-Chong Chen; Chun-Li Tang; Tao Wang; Ping-Yan Chen; Jie Xiang; Shi-Yue Li; Jin-Lin Wang; Zi-Jing Liang; Yi-Xiang Peng; Li Wei; Yong Liu; Ya-Hua Hu; Peng Peng; Jian-Ming Wang; Ji-Yang Liu; Zhong Chen; Gang Li; Zhi-Jian Zheng; Shao-Qin Qiu; Jie Luo; Chang-Jiang Ye; Shao-Yong Zhu; Nan-Shan Zhong
Journal:  N Engl J Med       Date:  2020-02-28       Impact factor: 91.245

6.  Frequency and Distribution of Chest Radiographic Findings in Patients Positive for COVID-19.

Authors:  Ho Yuen Frank Wong; Hiu Yin Sonia Lam; Ambrose Ho-Tung Fong; Siu Ting Leung; Thomas Wing-Yan Chin; Christine Shing Yen Lo; Macy Mei-Sze Lui; Jonan Chun Yin Lee; Keith Wan-Hang Chiu; Tom Wai-Hin Chung; Elaine Yuen Phin Lee; Eric Yuk Fai Wan; Ivan Fan Ngai Hung; Tina Poy Wing Lam; Michael D Kuo; Ming-Yen Ng
Journal:  Radiology       Date:  2020-03-27       Impact factor: 11.105

7.  Policies and Guidelines for COVID-19 Preparedness: Experiences from the University of Washington.

Authors:  Mahmud Mossa-Basha; Jonathan Medverd; Ken F Linnau; John B Lynch; Mark H Wener; Gregory Kicska; Thomas Staiger; Dushyant V Sahani
Journal:  Radiology       Date:  2020-04-08       Impact factor: 11.105

8.  Outbreak of pneumonia of unknown etiology in Wuhan, China: The mystery and the miracle.

Authors:  Hongzhou Lu; Charles W Stratton; Yi-Wei Tang
Journal:  J Med Virol       Date:  2020-02-12       Impact factor: 2.327

9.  CoroNet: A deep neural network for detection and diagnosis of COVID-19 from chest x-ray images.

Authors:  Asif Iqbal Khan; Junaid Latief Shah; Mohammad Mudasir Bhat
Journal:  Comput Methods Programs Biomed       Date:  2020-06-05       Impact factor: 5.428

10.  Human-machine partnership with artificial intelligence for chest radiograph diagnosis.

Authors:  Bhavik N Patel; Louis Rosenberg; Gregg Willcox; David Baltaxe; Mimi Lyons; Jeremy Irvin; Pranav Rajpurkar; Timothy Amrhein; Rajan Gupta; Safwan Halabi; Curtis Langlotz; Edward Lo; Joseph Mammarappallil; A J Mariano; Geoffrey Riley; Jayne Seekins; Luyao Shen; Evan Zucker; Matthew Lungren
Journal:  NPJ Digit Med       Date:  2019-11-18
