D. Javor, H. Kaplan, A. Kaplan, S. B. Puchner, C. Krestan, P. Baltzer.
Abstract
INTRODUCTION: Computed tomography (CT) is an essential diagnostic tool in the management of COVID-19. Considering the large number of examinations in high case-load scenarios, an automated tool could facilitate diagnosis and risk stratification of the disease and save critical time.
Keywords: Computed tomography; Coronavirus disease 2019; Deep learning
Year: 2020 PMID: 33190102 PMCID: PMC7641539 DOI: 10.1016/j.ejrad.2020.109402
Source DB: PubMed Journal: Eur J Radiol ISSN: 0720-048X Impact factor: 3.528
Table 1. Distribution of COVID-19 Cases and non-COVID-19 Diseases.
| Training and Validation Dataset (n patients) | Disease | Test Dataset (n patients) |
|---|---|---|
| 165 | COVID-19 | 45 |
| 12 (7 %) | Bacterial Pneumonia | 8 (18 %) |
| 4 (2 %) | Pneumocystis Pneumonia | 2 (4 %) |
| 2 (1 %) | Viral Pneumonia | 5 (11 %) |
| 8 (5 %) | Other infectious diseases (Klebsiella pneumonia, bronchiolitis, Mycobacterium avium complex infection, fungal pneumonia, etc.) | 15 (33 %) |
| 1 (0.6 %) | Abscess | 5 (11 %) |
| 21 (13 %) | Lung cancer and metastasis | 9 (20 %) |
| 5 (3 %) | Idiopathic pulmonary fibrosis | |
| 8 (5 %) | Organizing Pneumonia | |
| 3 (2 %) | Lipoid Pneumonia | |
| 5 (3 %) | Diffuse pulmonary haemorrhage and trauma | 1 (2 %) |
| 5 (3 %) | Hypersensitivity pneumonitis | |
| 5 (3 %) | Pulmonary edema | |
| 2 (1 %) | ARDS | |
| 86 (51 %) | Other (aspergillosis, sarcoidosis, histoplasmosis, silicosis, eosinophilic pneumonia, granulomatosis with polyangiitis, drug-induced lung disease, giant cell interstitial pneumonia, cystic fibrosis, etc.) | |
Fig. 1. ROC curves of the ML model classifier performance in the validation (A) and the independent testing (B) dataset. The dichotomous performance of both radiologists (R1, R2) in the test dataset is plotted as a comparator in B. Diagnostic performance metrics are given in Table 2.
Table 2. Diagnostic performance indices of the ML model classifier in the validation and testing datasets, compared with the performance of two radiologists.
| | | AUC (95 % CI) | Threshold | Sensitivity (%) | Specificity (%) | LR+ | LR- |
|---|---|---|---|---|---|---|---|
| ML Model | Validation Dataset | 0.986 (0.978–0.992) | Rule-out (>0.0006) | 99.3 | 75.8 | 4.1 | 0.009 |
| | | | Rule-in (>0.4) | 92.2 | 96.3 | 25.0 | 0.081 |
| | Test Dataset | 0.956 (0.890–0.988) | Rule-out (>0.0006) | 100 | 60 | 2.5 | <0.01 |
| | | | Rule-in (>0.4) | 84.4 | 93.3 | 12.7 | 0.17 |
| Radiologists | Radiologist 1 (Test Dataset) | 0.867 (0.779–0.929) | n.a. | 82.2 | 91.1 | 9.25 | 0.2 |
| | Radiologist 2 (Test Dataset) | 0.889 (0.805–0.945) | n.a. | 80 | 97.8 | 36.0 | 0.2 |
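The likelihood ratios in Table 2 follow directly from sensitivity and specificity (LR+ = sensitivity / (1 − specificity); LR− = (1 − sensitivity) / specificity). A minimal Python check, with values transcribed from the validation-dataset rule-in row above:

```python
def likelihood_ratios(sensitivity: float, specificity: float) -> tuple[float, float]:
    """Return (LR+, LR-) for a dichotomous test from sensitivity and specificity."""
    lr_pos = sensitivity / (1.0 - specificity)  # LR+ = sens / (1 - spec)
    lr_neg = (1.0 - sensitivity) / specificity  # LR- = (1 - sens) / spec
    return lr_pos, lr_neg

# Validation dataset, rule-in threshold: sensitivity 92.2 %, specificity 96.3 %
lr_pos, lr_neg = likelihood_ratios(0.922, 0.963)
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.3f}")  # close to the tabulated 25.0 and 0.081
```

Small differences from the tabulated values reflect rounding of sensitivity and specificity to one decimal place.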
Fig. 2. Fagan nomogram illustrating the ML classifier performance at a pretest probability of 10 %. Dashed lines represent the validation sample and solid lines the independent test sample. A positive result exceeding the rule-in COVID-19 threshold is indicated by the black lines; a negative result below the rule-out threshold for COVID-19 is indicated by the grey lines.
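The Fagan nomogram is a graphical form of Bayes' theorem in odds form: post-test odds = pre-test odds × likelihood ratio. A short Python sketch of the arithmetic behind the 10 % pretest scenario, using the test-dataset likelihood ratios from Table 2:

```python
def post_test_probability(pretest_prob: float, lr: float) -> float:
    """Convert a pre-test probability to a post-test probability via a likelihood ratio."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * lr  # Bayes' theorem in odds form
    return posttest_odds / (1.0 + posttest_odds)

pretest = 0.10  # 10 % pretest probability, as in Fig. 2

# Test dataset: rule-in result (LR+ = 12.7) vs. rule-out result (LR- < 0.01)
p_rule_in = post_test_probability(pretest, 12.7)
p_rule_out = post_test_probability(pretest, 0.01)
print(f"positive, above rule-in threshold:  {p_rule_in:.0%}")
print(f"negative, below rule-out threshold: {p_rule_out:.2%}")
```

With these inputs, a positive rule-in result raises the probability of COVID-19 from 10 % to well above 50 %, while a negative rule-out result lowers it to well under 1 %, which is the trade-off the two thresholds are designed for.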
Fig. 3. Advanced-stage COVID-19 with consolidations (left), falsely classified as non-COVID-19 (false negative); Pneumocystis carinii pneumonia (right), falsely classified as COVID-19 (false positive).