| Literature DB >> 35159930 |
Ke Cao, Karin Verspoor, Srujana Sahebjada, Paul N Baird.
(1) Background: The objective of this review was to synthesize available data on the use of machine learning to evaluate its accuracy (as determined by pooled sensitivity and specificity) in detecting keratoconus (KC), and measure reporting completeness of machine learning models in KC based on TRIPOD (the transparent reporting of multivariable prediction models for individual prognosis or diagnosis) statement. (2)Entities:
Keywords: artificial intelligence; diagnosis; early detection; keratoconus; machine learning; reporting completeness
Year: 2022 PMID: 35159930 PMCID: PMC8836961 DOI: 10.3390/jcm11030478
Source DB: PubMed Journal: J Clin Med ISSN: 2077-0383 Impact factor: 4.241
Figure 1. The search strategy used in the present study.
Figure 2. The PRISMA flowchart illustrating the literature selection process.
Figure 3. The count of machine learning studies in KC from 1995 to 2020.
Identified studies using machine learning in detection of KC and early KC.
| Study Objectives | First Author | Year | No. of Centers Involved (Country) | Sample Size | No. of KC/Early KC Eyes | Machine Learning Method(s) Used | Data Type (No. of Parameters) | Corneal Imaging Modality | Evaluation Methods |
|---|---|---|---|---|---|---|---|---|---|
| Detect KC eyes from controls | Maeda et al. | 1995 | 1 (USA) | 176 | 44 | Combined discriminant analysis and classification tree | P (8) | TMS-1 | Internal |
| | Kalin et al. | 1996 | NR | 106 | 5 | Combined discriminant analysis and classification tree | P (8) | TMS-1 | Validation study |
| | Rabinowitz et al. | 1998 | 1 (USA) | 241 | 99 | Linear discriminant analysis | P (5) | TMS-1 | Internal |
| | Twa et al. | 2005 | NR (USA) | 244 | 112 | Decision tree | P (36) | Keratron | Internal |
| | Bessho et al. | 2006 | 2 (Japan) | 165 | 63 | Logistic regression | P (NA) | Orbscan II | External |
| | Saad et al. | 2010 | NR | 143 | 31 | Discriminant analysis | P (51) | Orbscan IIz | Internal |
| | Smadja et al. | 2013 | 1 (France) | 325 | 148 | Decision tree | P (55) | GALILEI | Internal |
| | Mahmoud et al. | 2013 | 3 (Colombia, USA, Switzerland) | 407 | 163 | Logistic regression | P (NA) | GALILEI | External |
| | Saad et al. | 2014 | 1 (France) | 166 | 64 | Discriminant analysis | P (7) | Orbscan IIz | Internal |
| | Silverman et al. | 2014 | 1 (UK) | 204 | 74 | Multiple methods | P (161) | Artemis-1 | Internal |
| | Koprowski et al. | 2015 | 1 (Brazil) | 746 | 477 | Decision tree | P (11) | Corvis | Internal |
| | Shetty et al. | 2015 | 1 (India) | 128 | 85 | Logistic regression | P (NA) | Pentacam | Internal |
| | Kovacs et al. | 2016 | 1 (Hungary) | 120 | 60 | Neural network | P (NA) | Pentacam HR | Internal |
| | Ruiz et al. | 2016 | 1 (Belgium) | 648 | 454 | Support vector machine | P (22) | Pentacam HR | Internal |
| | Ambrosio et al. | 2017 | 2 (Brazil, Italy) | 756 | 276 | Multiple methods | P (NA) | Pentacam HR & Corvis ST | Internal |
| | Silverman et al. | 2017 | 1 (USA) | 141 | 30 | Discriminant analysis | P (240) | Artemis-1 & Pentacam | Internal |
| | Lopes et al. | 2018 | 5 (UK, Brazil, Italy, USA) | 3648 | 370 | Multiple methods | P (NA) | Pentacam | Internal & External |
| | Chandapura et al. | 2019 | NR | 439 | 218 including 102 early KC | Random forest | P (27) | Pentacam & OCT | Internal |
| | * Dos Santos et al. | 2019 | 1 (Austria) | 142 | 70 | Convolutional neural network | I | OCT | Internal |
| | Issarti et al. | 2019 | 1 (Belgium) | 624 | 312 | Neural network | P (28) | Pentacam | Internal |
| | Kamiya et al. | 2019 | 1 (Japan) | 543 | 304 | Convolutional neural network | I | AS-OCT | Internal |
| | * Lavric et al. | 2019 | NR | 3000 | 1500 | Convolutional neural network | I | SyntEyes model | Internal |
| | Leão et al. | 2019 | 2 (Brazil, Italy) | 574 | 223 | Discriminant analysis | P (NA) | Corvis ST | NR |
| | Bolarin et al. | 2020 | 1 (Spain) | 169 | 107 | Logistic regression | P | Sirius | Internal |
| | Castro-Luna et al. | 2020 | 1 (Spain) | 60 | 30 | Naive Bayes | P | CSO | Internal |
| | * Issarti et al. | 2020 | 2 (Belgium) | 812 | 508 | Neural network | P (90) | Pentacam HR | Internal & External |
| | Kuo et al. | 2020 | 1 (Taiwan) | 326 | 170 | Convolutional neural network | I | TMS-4 | Internal |
| | Lavric et al. | 2020 | NR | 3151 | 1181 including 791 early KC | Multiple methods | P (443) | SS-1000 CASIA OCT | Internal |
| | * Shi et al. | 2020 | 1 (China) | 121 | 38 | Neural network | P (49) | UHR-OCT & Pentacam HR | Internal |
| | Velazquez-Blazquez et al. | 2020 | 1 (Spain) | 178 | 104 including 61 early KC | Logistic regression | P (27) | Sirius | Internal |
| Detect early KC eyes from controls | Saad et al. | 2010 | NR | 143 | 40 | Discriminant analysis | P (51) | Orbscan IIz | Internal |
| | Smadja et al. | 2013 | 1 (France) | 224 | 47 | Decision tree | P (55) | GALILEI | Internal |
| | * Ventura et al. | 2013 | NR (Brazil) | 204 | 68 | Neural network | P (41) | Ocular Response Analyzer | Internal |
| | Chan et al. | 2015 | 1 (Singapore) | 128 | 24 | Discriminant analysis | P (NA) | Orbscan IIz | Validation study |
| | Kovacs et al. | 2016 | 1 (Hungary) | 75 | 15 | Neural network | P (NA) | Pentacam HR | Internal |
| | Ruiz et al. | 2016 | 1 (Belgium) | 261 | 67 | Support vector machine | P (22) | Pentacam HR | Internal |
| | Ambrosio et al. | 2017 | 2 (Brazil, Italy) | 574 | 94 | Multiple methods | P (NA) | Pentacam HR & Corvis ST | Internal |
| | Xu et al. | 2017 | 1 (China) | 363 | 77 | Discriminant analysis | P (NA) | Pentacam HR | Internal |
| | Lopes et al. | 2018 | 5 (UK, Brazil, Italy, USA) | 3537 | 259 | Multiple methods | P (NA) | Pentacam | Internal & External |
| | Issarti et al. | 2019 | 1 (Belgium) | 389 | 77 | Neural network | P (28) | Pentacam | Internal |
| | Cao et al. | 2020 | 1 (Australia) | 88 | 49 | Multiple methods | P (11) | Pentacam | Internal |
| | * Issarti et al. | 2020 | 2 (Belgium) | 812 | 117 | Neural network | P (90) | Pentacam HR | Internal & External |
| | * Kuo et al. | 2020 | 1 (Taiwan) | 354 | 28 | Convolutional neural network | I | TMS-4 | Internal |
| | * Shi et al. | 2020 | 1 (China) | 121 | 33 | Neural network | P (49) | UHR-OCT & Pentacam HR | Internal |
| KC severity | Yousefi et al. | 2018 | Multi-center (Japan) | 3156 | | Density-based clustering | P (420) | CASIA OCT | NA |
Study objectives: the aim of the research, which in this study was either detecting KC from controls or detecting early KC from controls. No. of centers involved: the number of centers involved; NR indicates that the number of centers was not reported explicitly. Data type (No. of parameters): the kind of data used as input to the machine learning models, which in this study was either images (graphics) or parameters (numeric); the letter ‘P’ denotes parameters, while the letter ‘I’ denotes images. Corneal imaging modality: the imaging system(s) from which the input data were derived. Evaluation methods: how the model’s performance was determined, either external (evaluation on an independent database) or internal (bootstrap validation, cross-validation, random training/test splits, temporal splits). Asterisks (*) indicate studies that were excluded from the meta-analysis.
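The internal evaluation methods listed above (e.g., k-fold cross-validation with random splits) can be illustrated with a minimal, dependency-free sketch. This is an illustrative example only, not code from any of the reviewed studies; the function names are the author's own:

```python
import random

def k_fold_indices(n_samples, k=5, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validation_splits(n_samples, k=5, seed=0):
    """Yield (train_indices, test_indices) pairs, one per fold.

    Each sample appears in exactly one test fold, so every eye in the
    dataset is used once for evaluation and k-1 times for training.
    """
    folds = k_fold_indices(n_samples, k, seed)
    for i, test in enumerate(folds):
        train = [j for f, fold in enumerate(folds) if f != i for j in fold]
        yield train, test
```

External evaluation, by contrast, would train on one center's dataset and test on a dataset that played no part in model development.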
Diagnostic performance of artificial intelligence in detection of KC versus controls using different imaging modalities.
| Imaging Modalities | Pooled Sensitivity | Pooled Specificity |
|---|---|---|
| Pentacam | 0.987 (95% CI 0.971–0.994) | 0.989 (95% CI 0.963–0.997) |
| TMS | 0.943 (95% CI 0.897–0.969) | 0.978 (95% CI 0.954–0.989) |
| Orbscan | 0.947 (95% CI 0.886–0.976) | 0.983 (95% CI 0.917–0.997) |
| Pooled total | 0.970 (95% CI 0.949–0.982) | 0.985 (95% CI 0.971–0.993) |
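The sensitivity and specificity pooled above are derived per study from confusion-matrix counts. A minimal sketch, using hypothetical counts rather than data from the review:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).

    tp: KC eyes correctly flagged; fn: KC eyes missed;
    tn: control eyes correctly cleared; fp: controls wrongly flagged.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical example: 97 of 100 KC eyes flagged, 197 of 200 controls cleared.
sens, spec = sensitivity_specificity(tp=97, fn=3, tn=197, fp=3)
# sens = 0.97, spec = 0.985
```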
Diagnostic performance of machine learning in detection of early KC using different imaging modalities.
| Imaging Modalities | Pooled Sensitivity | Pooled Specificity |
|---|---|---|
| Pentacam | 0.882 (95% CI 0.795–0.935) | 0.935 (95% CI 0.874–0.967) |
| Orbscan | 0.842 (95% CI 0.504–0.965) | 0.958 (95% CI 0.821–0.991) |
| Pooled total | 0.882 (95% CI 0.822–0.923) | 0.947 (95% CI 0.914–0.967) |
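Pooling per-study estimates, as in the tables above, is commonly done by weighting each study on the logit scale by the inverse of its variance. The sketch below is a simplified fixed-effect illustration with hypothetical counts; it is not the meta-analytic model used in the review:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def pooled_proportion(events, totals):
    """Fixed-effect inverse-variance pooling of proportions on the logit scale.

    events/totals: per-study counts, e.g. true positives / diseased eyes
    for sensitivity, or true negatives / control eyes for specificity.
    """
    weights, estimates = [], []
    for e, n in zip(events, totals):
        p = e / n
        var = 1 / e + 1 / (n - e)   # approximate variance of logit(p)
        weights.append(1 / var)
        estimates.append(logit(p))
    pooled = sum(w, x in ()) if False else sum(
        w * x for w, x in zip(weights, estimates)) / sum(weights)
    return inv_logit(pooled)

# Two hypothetical studies, each with 90% sensitivity, pool back to 0.9.
result = pooled_proportion([90, 45], [100, 50])
# result = 0.9
```

Larger studies get larger weights, which is why the pooled estimates sit closest to the biggest studies in each modality group.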
Figure 4. Summary receiver-operating characteristic curves of the diagnostic performance of machine learning in detecting KC (black circle) and early KC (triangle) from controls using Pentacam parameters. The white circle is the summary estimate point (sensitivity 0.956 (95% CI 0.897–0.982), specificity 0.968 (95% CI 0.931–0.985)) of studies using Pentacam parameters. The Y-axis represents sensitivity, with higher values indicating greater sensitivity; the X-axis represents the false positive rate, equal to 1 − specificity, with lower values indicating greater specificity.
Characteristics of machine learning-assisted studies for detection of KC severity.
| First Author | Year | Severity Grading | Definition/Classification Methods | Corneal Imaging Modality | Reported Sensitivity in Detection of Each Severity Level |
|---|---|---|---|---|---|
| Maeda et al. | 1995 | Mild (15) | NA | TMS-1 | Mild: 100% |
| Kamiya et al. | 2019 | Grade 1 (108) | Amsler–Krumeich classification | AS-OCT | Grade 1: 88.9% |
| Issarti et al. | 2019 | Mild KC (220) | a Self-defined | Pentacam | 98.81% |
| Issarti et al. | 2019 | Moderate KC (229) | b Self-defined | Pentacam | 99.91% |
| Bolarin et al. | 2020 | Grade I (44) | RETICS grading | Sirius | Grade I: 59.1% |
| Velazquez-Blazquez et al. | 2020 | Mild KC (42) | RETICS grading | Sirius | Mild KC: 63% |
a A clear cornea, tomography maps compatible with KC, a Fleischer ring at the apex base, slight thinning, and anterior and/or posterior corneal steepening; b slit-lamp findings compatible with KC, corneal thinning at the apex, Vogt striae, a clearly visible Fleischer ring, and corneal tomography compatible with KC. The severity of KC was considered to increase from Grade 1 to Grade 4 and from Grade I to Grade IV plus.
Figure 5. Overall adherence per TRIPOD item.