| Literature DB >> 33164982 |
Yanhong Yang1, Fleming Y M Lure2,3, Hengyuan Miao4, Ziqi Zhang4, Stefan Jaeger5, Jinxin Liu1, Lin Guo2.
Abstract
BACKGROUND: Accurate and rapid diagnosis of coronavirus disease (COVID-19) is crucial for timely quarantine and treatment.
Keywords: COVID-19; artificial intelligence (AI); computed tomography (CT); deep learning
Year: 2021 PMID: 33164982 PMCID: PMC7990455 DOI: 10.3233/XST-200735
Source DB: PubMed Journal: J Xray Sci Technol ISSN: 0895-3996 Impact factor: 1.535
Distribution of training dataset
| Category | Total (cases) | Male | Female | Age (mean±SD) | Maximum age | Minimum age | Number of CT slices with lesions |
| Tuberculosis | 337 | 237 | 100 | 44±20 | 89 | 12 | 7728 |
| Common pneumonia | 83 | 49 | 34 | 64±16 | 89 | 14 | 4552 |
| Non-COVID-19 viral pneumonia | 57 | 29 | 28 | 60±18 | 91 | 18 | |
| COVID-19 pneumonia | 32 | 20 | 12 | 50±14 | 68 | 26 | 1566 |
Distribution of testing dataset
| Category | Total (cases) | Male | Female | Age (mean±SD) | Maximum age | Minimum age | Total number of CT slices with lesions |
| COVID-19 Positive | 86 | 41 | 45 | 51±15 | 82 | 15 | 38211 |
| COVID-19 Negative | 99 | 73 | 26 | 44±16 | 87 | 12 | 45156 |
Fig. 1 Age distribution of patients in the testing dataset.
Fig. 2 Distribution of the severity of positive cases in the testing dataset.
Fig. 3 Workflow diagram for the development and evaluation of the AI model for detecting COVID-19 pneumonia.
Fig. 4 Main framework of the AI detection system.
Clinical characteristics of COVID-19 and non-COVID-19 pneumonia patient cohorts
| Characteristic | COVID-19 (n = 86) | non-COVID-19 (n = 99) |
| Age (years) | | |
| Mean±SD | 51.37±15.38 | 43.81±16.82 |
| <20 | 2 | 3 |
| 20–39 | 21 | 45 |
| 40–59 | 31 | 31 |
| ≥60 | 32 | 20 |
| Sex | | |
| Male | 41 | 73 |
| Female | 45 | 26 |
| Underlying diseases | None (22), proteinemia (12), lipemia (10), hypokalemia (10) | AIDS (56), none (21), liver disease (21) |
| CT findings | Patch, ground-glass opacity (GGO) | Bronchovascular bundle thickening and increase, nodules, patch |
| Severity | Mild type (6), moderate type (65), severe type (10), critical type (5) | |
Fig. 5 ROC curve of COVID-19 vs non-COVID-19.
Evaluation indicators of the AI detection software on the test dataset
| Evaluation index | AUC | Accuracy | SS | SP | PPV | NPV |
| Result | 0.903 | 0.914 | 0.918 | 0.909 | 0.898 | 0.928 |
Notation: AUC = area under curve; SS = sensitivity; SP = specificity; PPV = positive predictive value; NPV = negative predictive value.
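As a consistency check (an illustrative sketch, not the authors' code), the reported sensitivity and specificity can be converted back into confusion-matrix counts on the 185-patient test set (86 COVID-19-positive, 99 COVID-19-negative); accuracy, PPV, and NPV then follow from their standard definitions and agree with the table:

```python
n_pos, n_neg = 86, 99  # test-set composition from the tables above

# Back out confusion-matrix counts from the reported rates.
tp = round(0.918 * n_pos)   # sensitivity = TP / (TP + FN) -> 79 true positives
tn = round(0.909 * n_neg)   # specificity = TN / (TN + FP) -> 90 true negatives
fn = n_pos - tp             # 7 false negatives
fp = n_neg - tn             # 9 false positives

accuracy = (tp + tn) / (n_pos + n_neg)
ppv = tp / (tp + fp)        # positive predictive value
npv = tn / (tn + fn)        # negative predictive value

print(round(accuracy, 3))   # 0.914, matching the reported accuracy
print(round(ppv, 3))        # 0.898, matching the reported PPV
print(round(npv, 3))        # 0.928, matching the reported NPV
```

All three derived values reproduce the table, which confirms the reported metrics are internally consistent with the test-set composition.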
COVID-19 suspicion
| Diagnosis conclusion | Statistic | AI prediction: COVID-19 suspicion |
| COVID-19 Positive | Mean | 0.683 |
| | Standard deviation | 0.207 |
| | 95% CI | [0.639, 0.727] |
| COVID-19 Negative | Mean | 0.170 |
| | Standard deviation | 0.192 |
| | 95% CI | [0.132, 0.208] |
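The paper does not state how its confidence intervals were computed; a minimal sketch, assuming a normal-approximation interval (mean ± 1.96·SD/√n) over the 86 positive and 99 negative test patients, reproduces both reported CIs:

```python
from math import sqrt

def ci95(mean: float, sd: float, n: int) -> tuple[float, float]:
    """Normal-approximation 95% CI for a mean from summary statistics."""
    half = 1.96 * sd / sqrt(n)
    return round(mean - half, 3), round(mean + half, 3)

print(ci95(0.683, 0.207, 86))  # (0.639, 0.727) -- COVID-19-positive cases
print(ci95(0.170, 0.192, 99))  # (0.132, 0.208) -- COVID-19-negative cases
```

Both intervals match the table, so the normal approximation is a plausible reading of the reported CIs.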
Fig. 6 COVID-19 and non-COVID-19 suspicion by using AI.
Fig. 7 ROC curves of the radiologist average on the test set without (left) and with (right) AI assistance.
Results of three radiologists without and with AI assistance on the test set (n = 185) in differentiating COVID-19 from non-COVID-19
| Radiologist | Metric | Without AI assistance (95% CI) | With AI assistance (95% CI) | Difference (with minus without AI) |
| Radiologist A | Accuracy | 0.914 (0.863–0.947) | 0.924 (0.876–0.955) | 0.010 |
| | Sensitivity | 0.860 (0.77–0.92) | 0.884 (0.76–0.94) | 0.024 |
| Radiologist B | Accuracy | 0.832 (0.772–0.88) | 0.908 (0.857–0.943) | 0.076 |
| | Sensitivity | 0.884 (0.79–0.94) | 0.930 (0.85–0.97) | 0.046 |
| Radiologist C | Accuracy | 0.886 (0.832–0.925) | 0.935 (0.889–0.964) | 0.049 |
| | Sensitivity | 0.953 (0.909–0.976) | 0.953 (0.910–0.976) | 0 |
| Radiologist average | Accuracy | 0.941 (0.896–0.968) | 0.951 (0.909–0.976) | 0.010 |
| | Sensitivity | 0.895 (0.845–0.934) | 0.942 (0.896–0.968) | 0.047 |
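To make the accuracy gains concrete, a small illustrative script (not from the paper; rounding to whole patients is an assumption) converts each radiologist's accuracy change into patient counts on the n = 185 test set:

```python
n = 185  # size of the test set
results = {  # radiologist: (accuracy without AI, accuracy with AI), from the table
    "A": (0.914, 0.924),
    "B": (0.832, 0.908),
    "C": (0.886, 0.935),
}
for name, (without_ai, with_ai) in results.items():
    # Difference in (rounded) counts of correctly classified patients.
    extra = round(with_ai * n) - round(without_ai * n)
    print(f"Radiologist {name}: +{extra} correctly classified patients with AI")
```

Under this reading, AI assistance corresponds to roughly 2, 14, and 9 additional correctly classified patients for radiologists A, B, and C respectively, with radiologist B benefiting most.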
Fig. 8 Comparison of three radiologists without and with AI assistance on the test set (n = 185).
Fig. 9 Scores of COVID-19 suspicion for each patient when using AI.
Fig. 10 Scores of COVID-19 suspicion for each patient by three radiologists without and with AI assistance. Blue dots represent AI suspicion and green dots represent radiologist suspicion.
Fig. 11 AI suspicion degree on the test set (n = 185) in differentiating among mild, moderate, severe, and critical COVID-19 types.
Fig. 12 Comparison of three radiologists without and with AI assistance on the test set (n = 185) in differentiating among COVID-19 cases.