Sabitha Krishnamoorthy1, Sudhakar Ramakrishnan2, Lanson Brijesh Colaco3, Akshay Dias4, Indu K Gopi5, Gautham A G Gowda6, K C Aishwarya6, Veena Ramanan7, Manju Chandran8.
Abstract
BACKGROUND: Whether the sensitivity of deep learning (DL) models in screening chest radiographs (CXRs) for COVID-19 can approximate that of radiologists, so that they can be adopted and used when real-time review of CXRs by radiologists is not possible, has not been explored before.
Keywords: Artificial intelligence; COVID 19; CXR; deep learning
Year: 2021 PMID: 33814762 PMCID: PMC7996677 DOI: 10.4103/ijri.IJRI_914_20
Source DB: PubMed Journal: Indian J Radiol Imaging ISSN: 0970-2016
COVID_COLLECT_TRAINING_SET (CCTS)
| | USE | SOURCES |
|---|---|---|
| Stage 1 | | |
| DS1 | Random (non-radiograph) images for training | tiny-imagenet.herokuapp.com ( |
| DS2 | Random radiographs for training | Repository of all types of radiographs belonging to investigators and some images from NIH_imagenet ( |
| Stage 2 | | |
| DS3 | Non-CXR radiographs for training | Repository of non-CXRs belonging to investigators ( |
| DS4 | “Normal” CXRs for training the DL model | NIH_imagenet ( |
| | | montgomery_dataset ( |
| | | thrissurnormal_dataset ( |
| | | Normal_KVG ( |
| | | Normal_NIH ( |
| | | Normal_SNH pelvis-osteoporotic 18 ( |
| DS5 | “Abnormal” mix, including COVID and other chest abnormalities, for training the DL model | NIH_imagenet ( |
| | | COVID-19 ( |
| | | Covidspain_dataset ( |
| | | Covid19 cases_cxr (n=31) |
| | | Ieee8023-covid-chestxray-dataset ( |
| | | Montgomery_dataset ( |
| | | SNH ( |
| Stage 3 | | |
| DS6 | “COVID CXR” for training the DL model | Covidspain_dataset ( |
| | | Ieee8023-covid-chestxray-dataset ( |
| | | NIH_imagenet ( |
| | | Covid19 cases_cxr ( |
| | | COVID-19 ( |
| | | SNH_dataset ( |
| DS7 | “Non-COVID abnormal CXR” for training the DL model | NIH_imagenet ( |
| | | Montgomery_dataset ( |
| | | SNH_dataset ( |
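The staged dataset layout above implies a classification cascade: Stage 1 separates radiographs from random images (DS1/DS2), Stage 2 separates non-CXR radiographs and normal CXRs from abnormal CXRs (DS3/DS4/DS5), and Stage 3 separates COVID from non-COVID abnormal CXRs (DS6/DS7). A minimal sketch of that routing logic follows; the stage classifiers and label strings here are hypothetical stand-ins for illustration, not the authors' implementation:

```python
# Hypothetical three-stage routing, mirroring the DS1-DS7 layout above.
# Each stage_* callable stands in for a trained classifier; here they are
# stubbed out so the control flow can be demonstrated.

def classify_cascade(image, stage1, stage2, stage3):
    """Route an image through the three training stages."""
    if stage1(image) != "radiograph":          # Stage 1: DS1 vs DS2
        return "not a radiograph"
    kind = stage2(image)                       # Stage 2: DS3 / DS4 / DS5
    if kind == "non-cxr":
        return "non-CXR radiograph"
    if kind == "normal":
        return "normal CXR"
    # Stage 3: DS6 vs DS7 -- COVID vs other chest abnormality
    return "COVID CXR" if stage3(image) == "covid" else "non-COVID abnormal CXR"

# Stubbed stage classifiers for a quick demonstration:
result = classify_cascade(
    "img_001",
    stage1=lambda img: "radiograph",
    stage2=lambda img: "abnormal",
    stage3=lambda img: "covid",
)
```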
Validation Study Dataset
| COVID_COLLECT_REALWORLD_VALIDATION_SET (CCRVS) | REAL-WORLD IMAGES TEST | Dataset | Total images in the dataset | Images run through the DL model | Images excluded (from analysis) | Images used for analysis of DL model performance |
|---|---|---|---|---|---|---|
| Dataset compiled by investigators [DS8] | Random non-COVID CXRs [DS10] | ADFR | 60 | 60 | 18 | 42 |
| | | ADFR2 | 79 | 79 | 12 | 67 |
| | | ADFR3 | 32 | 32 | 4 | 28 |
| | | NKVG | 16 | 16 | 0 | 16 |
| | | NKVG2 | 8 | 8 | 1 | 7 |
| | | NKVG3 | 22 | 22 | 0 | 22 |
| | | N_SNH | 40 | 40 | 1 | 39 |
| | | N_NIH | 32 | 32 | 0 | 32 |
| | CXRs of RT-PCR-confirmed COVID cases [DS11] | BK | 8 | 8 | 0 | 8 |
| | | CBE | 9 | 9 | 3 | 6 |
| | | DP | 5 | 5 | 0 | 5 |
| | | MJRI | 2 | 2 | 0 | 2 |
| | | OJ | 6 | 6 | 0 | 6 |
| | | RP | 31 | 31 | 4 | 27 |
| | | AP | 5 | 5 | 1 | 4 |
| Covidspain_dataset [DS9] | CXRs of RT-PCR-positive COVID cases | SPAIN | 129 | 129 | 10 | 119 |

DL model validation study on June 10, 2020 using CCRVS: 484 total images in CCRVS, of which 54 were excluded and 430 were analysed for the study.
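The CCRVS totals (484 total, 54 excluded, 430 analysed) can be verified directly from the per-dataset counts; a quick sanity check, with the (total, run, excluded, analysed) tuples transcribed from the table above:

```python
# Per-dataset counts (total, run, excluded, analysed), transcribed from the
# CCRVS table above.
ccrvs = {
    "ADFR": (60, 60, 18, 42), "ADFR2": (79, 79, 12, 67), "ADFR3": (32, 32, 4, 28),
    "NKVG": (16, 16, 0, 16), "NKVG2": (8, 8, 1, 7), "NKVG3": (22, 22, 0, 22),
    "N_SNH": (40, 40, 1, 39), "N_NIH": (32, 32, 0, 32),
    "BK": (8, 8, 0, 8), "CBE": (9, 9, 3, 6), "DP": (5, 5, 0, 5),
    "MJRI": (2, 2, 0, 2), "OJ": (6, 6, 0, 6), "RP": (31, 31, 4, 27),
    "AP": (5, 5, 1, 4), "SPAIN": (129, 129, 10, 119),
}

total = sum(t for t, _, _, _ in ccrvs.values())      # total images in CCRVS
excluded = sum(e for _, _, e, _ in ccrvs.values())   # excluded from analysis
analysed = sum(a for _, _, _, a in ccrvs.values())   # analysed for the study
assert analysed == total - excluded                  # internal consistency
```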
Statistical Classification of DL Model Performance
Definitions for the Svita_DL8 results below: FP = the DL model misdiagnoses a normal CXR as COVID; FN = the DL model misdiagnoses a COVID CXR as normal; TP = the model correctly classifies a CXR as COVID; TN = the model correctly classifies a CXR as non-COVID. Precision (PPV) = TP/(TP + FP) (when the model predicts COVID positive, how often is it correct?); Sensitivity (SN, recall) = TP/(TP + FN) (how many of the COVID cases did the model identify?); Specificity (SP) = TN/(TN + FP) (the proportion of actual non-COVID CXRs identified as non-COVID); F1 score = 2·(P·R)/(P + R), an overall measure of accuracy; Diagnostic accuracy = (TP + TN)/(TP + FP + FN + TN). Reasons for exclusion are given under the exclusion criteria in the Discussion.

| Svita_DL8: June 10, 2020 test run results | FP | FN | TP | TN | Precision (PPV) | Sensitivity (SN) | Specificity (SP) | F1 Score | Diagnostic Accuracy | Excluded from analysis |
|---|---|---|---|---|---|---|---|---|---|---|
| ADFR | 23 | 0 | 0 | 19 | 0 | 0 | 0.452381 | 0 | 0.452381 | 18 |
| ADFR2 | 49 | 0 | 0 | 18 | 0 | 0 | 0.268657 | 0 | 0.268657 | 12 |
| ADFR3 | 20 | 0 | 0 | 8 | 0 | 0 | 0.285714 | 0 | 0.285714 | 4 |
| COVID_BK | 0 | 1 | 7 | 0 | 1 | 0.875 | 0 | 0.933333 | 0.875 | 0 |
| COVID_CBE | 0 | 0 | 6 | 0 | 1 | 1 | 0 | 1 | 1 | 3 |
| COVID_DP | 0 | 0 | 5 | 0 | 1 | 1 | 0 | 1 | 1 | 0 |
| COVID_MJRI | 0 | 0 | 2 | 0 | 1 | 1 | 0 | 1 | 1 | 0 |
| COVID_OJ | 0 | 0 | 6 | 0 | 1 | 1 | 0 | 1 | 1 | 0 |
| COVID_RP | 0 | 1 | 9 | 0 | 1 | 0.9 | 0 | 0.947368 | 0.9 | 4 |
| COVID_RP15-31 | 0 | 1 | 16 | 0 | 1 | 0.941176 | 0 | 0.969697 | 0.941176 | 0 |
| COVID_SPAIN | 0 | 12 | 107 | 0 | 1 | 0.899160 | 0 | 0.946903 | 0.899160 | 10 |
| COVID_AP | 0 | 0 | 4 | 0 | 1 | 1 | 0 | 1 | 1 | 1 |
| NKVG | 3 | 0 | 0 | 13 | 0 | 0 | 0.8125 | 0 | 0.8125 | 0 |
| NKVG2 | 0 | 0 | 0 | 7 | 0 | 0 | 1 | 0 | 1 | 1 |
| NKVG3 | 15 | 0 | 0 | 7 | 0 | 0 | 0.318182 | 0 | 0.318182 | 0 |
| N_NIH | 3 | 0 | 0 | 29 | 0 | 0 | 0.90625 | 0 | 0.90625 | 0 |
| N_SNH | 0 | 0 | 0 | 39 | 0 | 0 | 1 | 0 | 1 | 1 |
| OVERALL | 113 | 15 | 162 | 140 | 0.589091 | 0.915254 | 0.553360 | 0.716814 | 0.702326 | 54 |
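The metric columns follow directly from the confusion-matrix counts, so each row can be recomputed; a minimal sketch, checked here against the OVERALL row (the function name is illustrative):

```python
def cxr_metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from confusion-matrix counts.

    Zero denominators are reported as 0.0, matching the table's convention
    for datasets with no positives (or no negatives).
    """
    precision = tp / (tp + fp) if tp + fp else 0.0    # PPV
    recall = tp / (tp + fn) if tp + fn else 0.0       # sensitivity (SN)
    specificity = tn / (tn + fp) if tn + fp else 0.0  # SP
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = (tp + tn) / (tp + fp + fn + tn)        # diagnostic accuracy
    return precision, recall, specificity, f1, accuracy

# OVERALL row: FP=113, FN=15, TP=162, TN=140
p, r, sp, f1, acc = cxr_metrics(tp=162, fp=113, fn=15, tn=140)
# p ≈ 0.589091, r ≈ 0.915254, sp ≈ 0.553360, f1 ≈ 0.716814, acc ≈ 0.702326
```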
Figure 1. Comparison of deep learning model to radiologist interpretation
Figure 2. Distribution of false-negative CXR interpretations for COVID-19 by the deep learning model
Figure 3. Distribution of false-positive CXR interpretations for COVID-19 by the deep learning model
Figure 4. Examples of the interpretation of CXR by the deep learning model