| Literature DB >> 35453917 |
Ciara Mulrenan, Kawal Rhode, Barbara Malene Fischer.
Abstract
A COVID-19 diagnosis is primarily determined by RT-PCR or rapid lateral-flow testing, although chest imaging has been shown to detect manifestations of the virus. This article reviews the role of imaging (CT and X-ray) in the diagnosis of COVID-19, focusing on published studies that have applied artificial intelligence to detect COVID-19 or to reach a differential diagnosis between various respiratory infections. In this study, arXiv, medRxiv, PubMed, and Google Scholar were searched using the terms 'deep learning', 'artificial intelligence', 'medical imaging', 'COVID-19' and 'SARS-CoV-2'. The identified studies were assessed using a modified version of the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD) statement. Twenty-three studies fulfilled the inclusion criteria for this review: 11 papers evaluated the use of artificial intelligence (AI) for chest X-ray and 12 for CT. Dataset sizes ranged from 239 to 19,250 images, with sensitivities, specificities and AUCs ranging from 0.789 to 1.00, 0.843 to 1.00 and 0.850 to 1.00, respectively. While AI demonstrates excellent diagnostic potential, broader application of this method is hindered by the lack of relevant comparators, sufficiently sized datasets, and independent testing in the published studies.
Keywords: SARS-CoV-2; artificial intelligence; deep learning; medical imaging
Year: 2022 PMID: 35453917 PMCID: PMC9025113 DOI: 10.3390/diagnostics12040869
Source DB: PubMed Journal: Diagnostics (Basel) ISSN: 2075-4418
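The abstract summarises model performance as sensitivity, specificity, and AUC. A minimal sketch of how these three metrics are typically computed from a binary classifier's outputs, assuming Python with NumPy and scikit-learn; the label and score arrays below are illustrative placeholders, not data from any reviewed study:

```python
# Minimal sketch: sensitivity, specificity, and AUC for a binary
# COVID-19/non-COVID classifier. Labels and scores are placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                    # 1 = COVID-19, 0 = non-COVID
y_score = np.array([0.9, 0.2, 0.8, 0.6, 0.3, 0.4, 0.7, 0.1])   # model probabilities
y_pred = (y_score >= 0.5).astype(int)                           # threshold at 0.5

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)          # true-positive rate (recall)
specificity = tn / (tn + fp)          # true-negative rate
auc = roc_auc_score(y_true, y_score)  # threshold-independent ranking metric

print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} AUC={auc:.3f}")
```

Sensitivity and specificity depend on the chosen decision threshold, whereas AUC summarises performance over all thresholds, which is why the reviewed studies often report all three.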
Modified TRIPOD assessment of X-ray studies.
| Lead Author | Year | Country | Study Type | Aim | Dataset | Reference Standard | Comparator | Validation | External Testing | Main Findings | CRS | Funding |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| [ | 2020 | United States of America | Case control | Detection of COVID-19 from X-ray | Training: 103 COVID-19 images (GitHub COVID image dataset), 500 non-COVID but pathological, 500 normal (Kaggle RSNA Pneumonia Detection Challenge dataset). | X-ray pre-annotated by Radiographer | No | Cross-validation | Yes | Machine-learning algorithm can diagnose cases of COVID-19 from chest X-ray. | 2 | Not disclosed |
| [ | 2020 | United States of America | Case control | Detection of COVID-19 from X-ray | Training: 4698, Validation 523, Testing 580. | X-ray pre-annotated by Radiographer | No | K-fold cross-validation | No | Machine-learning algorithm can diagnose cases of COVID-19 from chest X-ray. | 2 | No external funding |
| [ | 2020 | Israel | Retrospective | Detection of COVID-19 from X-ray | Training: 2076 | X-ray annotated by Radiographer and positive RT-PCR test. | No | Cross-validation | No | Machine-learning algorithm can diagnose cases of COVID-19 from a portable chest X-ray. | 2 | Not disclosed |
| [ | 2020 | United States of America | Case control | Detection of COVID-19 from X-ray | Training: 6324, Validation: 1574, Test 1970. | X-ray pre-annotated by Radiographer | No | Cross-validation | No | Machine-learning algorithm can diagnose cases of COVID-19 from chest X-ray. | 2 | No external funding |
| [ | 2020 | United States of America | Retrospective | Detection of COVID-19 from X-ray | COVID = 455 (Cohen, 2020) | X-ray annotated by Radiographer | No | K-fold cross-validation | No | Machine-learning algorithm can diagnose cases of COVID-19 from a portable chest X-ray. | 2 | No external funding |
| [ | 2020 | Bangladesh | Case control | Detection of COVID-19 from X-ray | Training: 17,749, of which 232 COVID-19. | X-ray pre-annotated by Radiographer | No | K-fold cross-validation | No | Machine-learning algorithm can diagnose cases of COVID-19 from chest X-ray. | 2 | No external funding |
| [ | 2020 | United States of America | Retrospective | Detection of COVID-19 from X-ray | Training and Testing split 75:25 randomly. | X-ray pre-annotated by Radiographer and/or a positive RT-PCR test. | No | K-fold cross-validation (5-fold). | No | Machine-learning algorithm can diagnose cases of COVID-19 from a portable chest X-ray. | 2 | No external funding |
| [ | 2020 | Brazil | Case control | Detection of COVID-19 from X-ray | Training: 5715 | Positive RT-PCR. | No | K-fold Cross-validation | No | Machine-learning algorithm can diagnose cases of COVID-19 from chest X-ray and a full clinical history/examination. | 2 | No external funding |
| [ | 2020 | Turkey | Case control | Detection of COVID-19 from X-ray | Training: 80% | X-ray in recovered patients, pre-annotated by Radiographer. | No | K-fold Cross-validation | No | Machine-learning algorithm can diagnose cases of COVID-19 from chest X-ray. | 2 | Not disclosed |
| [ | 2020 | Japan | Case control | Detection of COVID-19 from X-ray | Training: 410 COVID-19 (GitHub, Dr Cohen), 500 non-COVID (NIH, ChexPert) | X-ray pre-annotated by Radiographer | No | K-fold Cross-validation (10-fold) | No | Machine-learning algorithm can diagnose cases of COVID-19 from chest X-ray. | 2 | No external funding |
| [ | 2020 | United States of America | Case control | Detection of COVID-19 from X-ray | Training: 84 COVID-19 (Radiopaedia, Societa Italiana di Radiologia Medica, GitHub Dr Cohen), 83 Normal (Mooney et al.). | X-ray pre-annotated by Radiographer. | No | Cross-validation | No | Machine-learning algorithm can diagnose cases of COVID-19 from chest X-ray. | 2 | Not disclosed |
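Nearly every X-ray study above reports (k-fold) cross-validation rather than external testing as its validation strategy. A minimal sketch of a stratified 5-fold loop, assuming scikit-learn; `X`, `y`, and the logistic-regression stand-in are synthetic placeholders for the image features and CNN classifiers actually used. Stratification matters here because COVID-19 cases were often a small minority of the data (e.g., 232 of 17,749 training images in one study above):

```python
# Minimal sketch of stratified k-fold cross-validation, as reported by most
# of the X-ray studies above. X and y are synthetic stand-ins; in practice
# each row would correspond to (features of) one chest X-ray.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32))            # placeholder image features
y = (rng.random(1000) < 0.05).astype(int)  # ~5% positives: an imbalanced class

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = []
for train_idx, test_idx in skf.split(X, y):
    # Stratification preserves the class ratio in every fold, which is
    # important when COVID-19 cases are rare relative to controls.
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores = clf.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], scores))

print(f"mean AUC over 5 folds: {np.mean(aucs):.3f}")
```

Cross-validation estimates how a model generalises within the same dataset; it does not substitute for the external testing that most of these studies lack.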
Modified TRIPOD assessment for CT studies.
| Lead Author | Year | Country | Study Type | Aim | Dataset | Reference Standard | Comparator | Validation | External Testing | Main Findings | CRS | Funding |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| [ | 2020 | United States of America | Case control | Detection of COVID-19 from chest CT, by discriminating between ground-glass opacities in COVID-19 and miliary tuberculosis. | 606 COVID-19 | CT slices pre-annotated by Radiographer | No | Cross-validation | Yes | Machine-learning algorithm can diagnose cases of COVID-19 from chest CT. | 2 | Veteran Affairs Research Career Scientist Award, VA COVID Rapid Response Support, University of South Florida Strategic Investment Program Fund, Department of Health, Simons Foundation, Microsoft and Google. |
| [ | 2020 | China | Case control | Detection of COVID-19 from chest CT. | Training: 1210 COVID-19, 1985 Non-COVID. | CT slices pre-annotated by Radiographer | No | Cross-validation (200 epochs) | No | Machine-learning algorithm can diagnose cases of COVID-19 from chest CT. | 2 | Not disclosed |
| [ | 2020 | India | Case control | Detection of COVID-19 from chest CT. | Training: 1984 | CT slices pre-annotated by Radiographer | No | Cross-validation (30 epochs) | No | Machine-learning algorithm can diagnose cases of COVID-19 from chest CT. | 2 | No external funding |
| [ | 2021 (newest version) | Iran | Case control | Detection of COVID-19 from chest CT. | Training: 3023 Non-COVID, 1773 COVID-19. | CT slices pre-annotated by Radiographer, confirmed by 2 pulmonologists, correct clinical presentation, positive RT-PCR report. | Yes, 4 experienced radiologists. | Cross-validation | Yes | Machine-learning algorithm can diagnose cases of COVID-19 from chest CT. | 4 | No external funding |
| [ | 2020 | China | Case control | Detection of COVID-19 from chest CT. | Training: 499 COVID-19 | CT slices annotated by Radiographer | No | Cross-validation (100 epochs) | No | Machine-learning algorithm can diagnose cases of COVID-19 from chest CT. | 2 | No external funding |
| [ | 2020 | United States of America | Case control | Detection of COVID-19 from chest CT. | Training: 657 COVID-19, 2628 Non-COVID. Validation: 120 COVID-19, 477 Non-COVID. | CT slices annotated based on RT-PCR reports. | No | Cross-validation (50 epochs) | No | Machine-learning algorithm can diagnose cases of COVID-19 from chest CT. | 2 | Not disclosed |
| [ | 2020 | China | Case control | Detection of COVID-19 from chest CT. | Training: 642 COVID-19, 674 Non-COVID. | CT slices pre-annotated by 2 Radiographers (with 30+ years' experience) | Yes, 8 radiologists (4 from COVID-19 hospitals, 4 from non-COVID hospitals). | Cross-validation | Yes | Machine-learning algorithm can diagnose cases of COVID-19 from chest CT. | 4 | Not disclosed |
| [ | 2020 | Belgium | Case control | Detection of COVID-19 from chest CT. | Training: 80% | CT slices annotated based on | No (retrospectively assessed radiologists' performance, but not simultaneously with the algorithm). | K-fold cross-validation (10-fold). | No | Machine-learning algorithm can diagnose cases of COVID-19 from chest CT. | 2 | Interreg V-A Euregio Meuse-Rhine, ERC grant, European Marie Curie Grant. |
| [ | 2020 | South Korea | Case control | Detection of COVID-19 from chest CT. | Training: 1194 COVID-19 (Wonkwang Hospital, Chonnam Hospital, Societa Italiana di Radiologia Medica), 2799 Non-COVID (inc. pneumonia, normal lung, lung cancer, non-pneumonia pathology; all from Wonkwang Hospital) | CT slices pre-annotated by Radiographer | No | K-fold cross-validation (5-fold). | Yes | Machine-learning algorithm can diagnose cases of COVID-19 from chest CT. | 2 | National Research Foundation of Korea, Ministry of Science, ICT and Future Planning via Basic Science Research Program. |
| [ | 2020 | United States of America | Case control | Detection of COVID-19 from chest CT. | Training: 526 COVID-19, 533 Non-COVID | CT slices annotated based on | No | Cross-validation | Yes | Machine-learning algorithm can diagnose cases of COVID-19 from chest CT. | 2 | NIH Center for Interventional Oncology, Intramural Research Program of the National Institutes of Health and the NIH Intramural Targeted Anti-COVID-19 Program. |
| [ | 2020 | United States of America | Case control | Detection of COVID-19 from chest CT. | Training: 242 COVID-19, 292 Non-COVID. | CT slices annotated based on | Yes, 2 radiologists. | Cross-validation | No | Demographic information (travel, exposure, patient age, sex, WBC count and symptoms) combined with output from machine-learning algorithm can diagnose cases of COVID-19. | 3 | US NIH grant. |
| [ | 2020 | China | Retrospective | Detection of COVID-19 from chest CT. | Training: 1294 COVID-19, 1969 Non-COVID. | CT slices annotated by 5 radiographers. | Yes, 5 radiologists. | Cross-validation | Yes | Not disclosed. | 4 | Not disclosed |
Figure 1. PRISMA flow chart of the study selection process, showing the total records identified, records screened, duplicates removed, and records included and excluded.
Summary of results from AI studies for chest X-ray COVID-19 classification.
| Author | Dataset Classes | Deep Learning Model | 2D/3D | All Data | COVID Data | Train All/COVID | Validation All/COVID | Test All/COVID | Sensitivity (%) | Specificity (%) | AUC | Dataset Source | Code URL |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| [ | COVID-19/non-COVID pneumonia/normal | Microsoft CustomVision | 2D | 1000 | 500 | 970/474 | 30/10 | / | 100 | 95 | / | Public | / |
| [ | COVID-19/non-COVID infection/normal | AIDCOV using VGG-16 | 2D | 5801 | 269 | 4698 | 523 | 580 | 99.3 | 99.98 | n/a | Public | / |
| [ | COVID-19/normal | ResNet-50 | 2D | 2427 | 360 | 2120/84 | 350/178 | n/a | 87.1 | 92.4 | 0.94 | Institutional dataset | / |
| [ | COVID-19/viral pneumonia/normal | VGG-16 | 2D | 1034 | 274 | 724/192 | / | 310/82 | / | / | 0.9978 | Public | / |
| [ | COVID-19/viral/bacterial pneumonia/TB | DenseNet-201 | 2D | 9868 | 494 | 6324/(n/a) | 1574/(n/a) | 1970/125 | 94.62 | / | / | Public | / |
| [ | COVID/non-COVID | HRNet | 2D | 1410 | 410 | n/a | / | 1410/410 | 98.53 | 98.52 | / | Public | / |
| [ | COVID/bacterial pneumonia/viral pneumonia/normal | VGG-16 | 2D | 2031 | 445 | 1523/334 | / | 508/111 | 79.0 | 93 | 0.85 | Public | / |
| [ | COVID/non-COVID | Faster R-CNN | 2D | 19,250 | 283 | 17,749/232 | / | 1501/51 | 97.65 | 95.48 | / | Public | / |
| [ | COVID-19/bacterial and viral pneumonia/normal | IKONOS | 2D | 6320 | 464 | / | / | / | 97.7 | 99.3 | / | Public | / |
| [ | COVID-19/pneumonia/normal | DarkNet-19 | 2D | 1125 | 125 | 900/100 | / | 225/25 | 95.13 | 95.3 | / | Public | / |
| [ | COVID-19/non-COVID-19 | Residual Attention Network | 2D | 239 | 120 | 167/84 | 50/25 | 22/11 | 100 | 96 | 1 | Public | / |
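Most of the models in this table (VGG-16, ResNet-50, DenseNet-201) follow the same transfer-learning pattern: start from an ImageNet-pretrained 2D CNN, replace the classification head, and fine-tune on chest X-rays. A minimal PyTorch/torchvision sketch of that pattern; the DataLoader, class count, and learning rate are illustrative assumptions, not settings from any specific study:

```python
# Minimal sketch of the transfer-learning pattern common to the studies
# above: take an ImageNet-pretrained 2D CNN, replace its head with a
# COVID/non-COVID classifier, and fine-tune. The DataLoader is a placeholder.
import torch
import torch.nn as nn
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)  # 2 classes
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_one_epoch(loader):
    """One pass over a DataLoader yielding (image_batch, label_batch)."""
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Individual studies differ in detail (some freeze early layers, some train from scratch), but this head-replacement recipe is the common baseline.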
Summary of results from AI studies for CT COVID-19 classification.
| Author | Dataset Classes | Deep Learning Model | 2D/3D | All Data | COVID Data | Train All/COVID | Validation All/COVID | Test All/COVID | Sensitivity (%) | Specificity (%) | AUC | Dataset Source | Code URL |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| [ | COVID-19/non-COVID disease/normal | / | 2D | 904 | 606 | 2685/2116 | / | 34/34 | 97.06 | / | 0.9664 | Institutional dataset | / |
| [ | COVID-19/common pneumonia/normal | MNas3DNet41 | 3D | 3993 | 1515 | 3195/1213 | / | 798/302 | 86.09 | 93.15 | 0.957 | Public | / |
| [ | COVID-19/non-COVID-19 disease | GLCM | 2D | 2483 | 1252 | 1986/1002 | / | 497/250 | / | 98.77 | 0.987 | Public | / |
| [ | COVID-19/non-COVID pathological/normal | Ai-corona | 2D | 2121 | 720 | 1764/601 | / | 357/119 | 92.4 | 98.3 | 0.989 | Institutional dataset | / |
| [ | COVID-19/normal | DeCoVNet | 3D | 630 | 630 | 499 | / | 131 | 90.7 | 91.1 | 0.976 | Institutional dataset | / |
| [ | COVID/non-COVID | COVIDNet | 2D | 1993 | 920 | 1316/642 | 522/233 | 894/387 | 92.2 | 98.6 | 0.98 | Institutional dataset | / |
| [ | COVID/non-COVID | RadiomiX | 2D | 1381 | 181 | 1104/145 | / | 276/36 | 78.94 | 91.09 | 0.9398 | Institutional dataset | / |
| [ | COVID-19/bacterial/viral pneumonia | FCONet (fine-tuned ResNet-50) | 2D | 4257 | 1194 | 3194/955 | / | 1063/239 | 99.58 | 100 | 1 | Institutional dataset | / |
| [ | COVID-19/non-COVID-19 | DenseNet-121 | 3D | 2724 | 1029 | 1059/526 | 328/177 | 1337/326 | 84.0 | 93.0 | 0.949 | Mixed | / |
| [ | COVID-19/non-COVID-19 | Inception-ResNet-v2 | 3D | 905 | 419 | 534/242 | 92/43 | 279/134 | 82.8 | 84.3 | 0.92 | Institutional dataset | / |
| [ | COVID-19/viral pneumonia/bacterial pneumonia/influenza | OpenCovidDetector | 2D | 11,356 | 3084 | 2688/751 | / | 6337/2333 | 87.03 | 96.60 | 0.9781 | Public | / |
| [ | COVID-19/non-COVID-19 | U-Net | 2D | 5212 (slices) | 275 | 3285/657 | 597/120 | 1330/266 | 96.3 | 93.6 | / | Public | / |
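Several CT pipelines in this table are 2D models applied slice by slice (one dataset above is counted in slices), so a per-scan diagnosis requires aggregating slice-level outputs. A minimal sketch of one such aggregation rule, mean-pooling of slice probabilities; this rule is an illustrative assumption, as the reviewed papers differ in how they aggregate:

```python
# Minimal sketch: turning per-slice COVID probabilities into a per-scan
# decision, as 2D CT pipelines must. Mean pooling is one common choice;
# the reviewed studies use various aggregation rules.
from collections import defaultdict
from statistics import mean

def scan_level_calls(slice_probs, threshold=0.5):
    """slice_probs: iterable of (scan_id, probability) pairs."""
    by_scan = defaultdict(list)
    for scan_id, p in slice_probs:
        by_scan[scan_id].append(p)
    # A scan is called positive if its mean slice probability crosses the threshold.
    return {sid: mean(ps) >= threshold for sid, ps in by_scan.items()}

calls = scan_level_calls([("scan1", 0.9), ("scan1", 0.7), ("scan2", 0.2)])
print(calls)  # {'scan1': True, 'scan2': False}
```

3D models such as DeCoVNet sidestep this step by taking the whole volume as input, at the cost of heavier training.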
Datasets shared amongst the studies.
| Dataset Name | Used in Studies |
|---|---|
| Kaggle RSNA Pneumonia Detection challenge dataset | [ |
| NIH | [ |
| SUNY | [ |
| LIDC | [ |
| CC-CCII | [ |
| Tianchi-Alibaba | [ |
| MosMedData | [ |
| Cohen database | [ |
| Italian Society of Medical and Interventional Radiology | [ |
| WKUH (Wonkwang University hospital) | [ |
| CNUH (Chonnam National University Hospital) | [ |
| COVID-19-CT-dataset | [ |
| MDH (Masih Daneshvari Hospital) | [ |
| Peshmerga Hospital Erbil | [ |
Deep learning methods (CNN) used across all studies.
| Network Name | Used in Studies |
|---|---|
| HRNet | [ |
| Microsoft CustomVision | [ |
| GLCM | [ |
| ResNet | [ |
| RadiomiX | [ |
| U-Net | [ |
| VGG | [ |
| DenseNet | [ |
| DarkNet | [ |
| EfficientNet | [ |
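Most of the networks listed here are standard ImageNet backbones. A minimal sketch, assuming PyTorch/torchvision, of how the torchvision-provided ones can be instantiated with pretrained weights and a replaced two-class head; HRNet, DarkNet, Microsoft CustomVision, and GLCM are not torchvision models and would need other implementations:

```python
# Minimal sketch (assuming torchvision >= 0.13): instantiating several of the
# backbone families named above with ImageNet weights, replacing each family's
# differently named classifier head with a two-class COVID/non-COVID layer.
import torch.nn as nn
from torchvision import models

def build_backbone(name: str, num_classes: int = 2) -> nn.Module:
    """Return a pretrained backbone with its classifier head replaced."""
    if name == "resnet50":
        net = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        net.fc = nn.Linear(net.fc.in_features, num_classes)
    elif name == "densenet121":
        net = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
        net.classifier = nn.Linear(net.classifier.in_features, num_classes)
    elif name == "vgg16":
        net = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
        net.classifier[6] = nn.Linear(net.classifier[6].in_features, num_classes)
    elif name == "efficientnet_b0":
        net = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
        net.classifier[1] = nn.Linear(net.classifier[1].in_features, num_classes)
    else:
        raise ValueError(f"unsupported backbone: {name}")
    return net
```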
Overview of AI and radiologist performance in studies with radiologist as comparator.
| Study | AI Performance | Radiologist Performance | Additional Information | Experience of the Radiologist | ||
|---|---|---|---|---|---|---|
| Sensitivity | Specificity | Sensitivity | Specificity | |||
| [ | 0.92 | 0.97 | 0.90 | 0.88 | Time taken to assess one image: | 4 radiologists, average experience 9.25 years. |
| [ | 0.92 | 0.99 | 0.77 | 0.90 | 4 radiologists, average experience 11.25 years. | |
| [ | 0.98 | 0.91 | 0.96 | 0.72 | Time taken to assess one image: | 5 radiologists, average experience of 8 years. |
| [ | 0.84 | 0.83 | 0.75 | 0.94 | 2 radiologists, average experience of 6 years. | |
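One way to compare the paired sensitivity/specificity figures above on a single scale is Youden's J (sensitivity + specificity - 1). A minimal sketch computing it from the values in this table; Youden's J is our illustrative choice here, not an analysis performed in the review:

```python
# Minimal sketch: Youden's J (sensitivity + specificity - 1) for the AI
# models and radiologists in the table above, using the reported values.
rows = [  # (ai_sens, ai_spec, rad_sens, rad_spec), in table order
    (0.92, 0.97, 0.90, 0.88),
    (0.92, 0.99, 0.77, 0.90),
    (0.98, 0.91, 0.96, 0.72),
    (0.84, 0.83, 0.75, 0.94),
]
for i, (ai_se, ai_sp, r_se, r_sp) in enumerate(rows, 1):
    print(f"study {i}: AI J={ai_se + ai_sp - 1:.2f}, "
          f"radiologist J={r_se + r_sp - 1:.2f}")
```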
Figure 2. Box plot displaying the spread of results from the CT and X-ray models. The box marks the interquartile range, the line in the center the median, and the whiskers the extremes at either end.
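A minimal matplotlib sketch of how a Figure-2-style box plot can be generated; the values below are the sensitivities reported in the two summary tables above (where given), rescaled from percentages to fractions:

```python
# Minimal sketch of a Figure-2-style box plot from the sensitivity columns
# of the two summary tables above (values where reported, as fractions).
import matplotlib.pyplot as plt

xray_sens = [1.00, 0.993, 0.871, 0.9462, 0.9853, 0.790,
             0.9765, 0.977, 0.9513, 1.00]
ct_sens = [0.9706, 0.8609, 0.924, 0.907, 0.922, 0.7894,
           0.9958, 0.840, 0.828, 0.8703, 0.963]

fig, ax = plt.subplots()
ax.boxplot([xray_sens, ct_sens])  # box = IQR, line = median, whiskers = extremes
ax.set_xticklabels(["X-ray models", "CT models"])
ax.set_ylabel("Sensitivity")
plt.show()
```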