Yan Tong, Wei Lu, Yue Yu, Yin Shen.
In clinical ophthalmology, a variety of image-related diagnostic techniques have begun to offer unprecedented insights into eye diseases based on morphological datasets with millions of data points. Artificial intelligence (AI), inspired by the human multilayered neuronal system, has shown astonishing success within some visual and auditory recognition tasks. In these tasks, AI can analyze digital data in a comprehensive, rapid and non-invasive manner. Bioinformatics has become a focus particularly in the field of medical imaging, where it is driven by enhanced computing power and cloud storage, as well as utilization of novel algorithms and generation of data in massive quantities. Machine learning (ML) is an important branch in the field of AI. The overall potential of ML to automatically pinpoint, identify and grade pathological features in ocular diseases will empower ophthalmologists to provide high-quality diagnosis and facilitate personalized health care in the near future. This review offers perspectives on the origin, development, and applications of ML technology, particularly regarding its applications in ophthalmic imaging modalities.Entities:
Keywords: Artificial intelligence; Deep learning; Machine learning; Ophthalmic imaging modalities
Year: 2020 PMID: 32322599 PMCID: PMC7160952 DOI: 10.1186/s40662-020-00183-6
Source DB: PubMed Journal: Eye Vis (Lond) ISSN: 2326-0254
Fig. 1 The applications of AI techniques in the eye clinic
Representative algorithms in ML and DL
| AI Techniques | Classification | Algorithms |
|---|---|---|
| Conventional machine learning | Supervised learning | SVM, Linear regression, Logistic regression, RF, KNN, Naïve Bayes, Decision tree, AdaBoost, Neural network methods |
| | Unsupervised learning | Principal component analysis, K-means, Expectation-maximization, Mean shift, Hierarchical clustering, Affinity propagation, Iterative self-organizing data, Fuzzy C-means systems |
| | Reinforcement learning | Q-learning, Temporal difference learning, State-Action-Reward-State-Action, Teaching-Box systems, Maja systems |
| Deep learning | DBN | Convolutional deep belief network, Conditional restricted Boltzmann machine |
| | CNN | AlexNet, GoogleNet, Visual geometry group network (VGG), Deep residual learning, Inception v4 (v2, v3), ResNet-152 (34, 50, 101), LeNet |
| | RNN | Bidirectional RNN, Long short-term memory |
DBN = deep belief network; CNN = convolutional neural network; RNN = recurrent neural network; SVM = support vector machine; RF = random forest; KNN = k-nearest neighbor
Fig. 2 The relationship among the subsets of AI. Machine learning techniques emerged in the 1980s, while deep learning techniques have been applied since the 2010s. Abbreviations: ML, machine learning; DL, deep learning
Fig. 3 Schematic diagram of common algorithms in AI. a SVMs are supervised learning models used for the classification and regression of data. b RFs are an ensemble learning method that uses multiple decision trees to train on and predict samples. c CNNs are composed of layers of stacked neurons that can learn complex functions. d Reinforcement learning algorithms train the actions of an agent in an environment. Abbreviations: SVM, support vector machine; RF, random forest; CNN, convolutional neural network
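Q-learning, listed among the reinforcement learning algorithms in the table above, can be illustrated with a minimal sketch. The toy 5-state corridor environment and all hyperparameters below are invented for illustration, not taken from the review:

```python
import random

# Toy corridor: the agent starts in state 0 and earns reward 1 only on
# reaching the terminal state 4.
N_STATES = 5
ACTIONS = (-1, +1)                        # move left / move right
alpha, gamma, epsilon = 0.5, 0.9, 0.3     # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(300):                      # training episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, a2)] for a2 in ACTIONS) - Q[(s, a)])
        s = s2

# the learned greedy policy moves right in every non-terminal state
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)  # [1, 1, 1, 1]
```

The agent learns purely from trial and error, with no labeled examples, which is what distinguishes reinforcement learning from the supervised methods in the same table.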
Fig. 4 Top-5 error of representative CNN algorithms. Top-5 error: the probability that none of the five most probable labels given by the image classification algorithm is correct. Abbreviations: VGG, visual geometry group; GoogleNet, google inception net; ResNet, residual network
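The top-5 error defined in the caption can be computed directly from per-class prediction scores. A pure-Python sketch with made-up scores over eight hypothetical classes:

```python
def top5_error(score_rows, true_labels):
    """Fraction of samples whose true label is NOT among the five
    highest-scoring class labels produced by the classifier."""
    errors = 0
    for scores, truth in zip(score_rows, true_labels):
        # rank class indices by descending score, keep the top five
        top5 = sorted(range(len(scores)), key=lambda c: scores[c], reverse=True)[:5]
        if truth not in top5:
            errors += 1
    return errors / len(true_labels)

# Illustrative scores over 8 classes for 2 samples (hypothetical numbers)
scores = [
    [0.30, 0.20, 0.15, 0.12, 0.10, 0.08, 0.03, 0.02],  # true class 6 -> miss
    [0.05, 0.40, 0.25, 0.10, 0.08, 0.07, 0.03, 0.02],  # true class 2 -> hit
]
print(top5_error(scores, [6, 2]))  # 0.5: one of two samples missed
```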
Fig. 5 Open-source DL research libraries in major programming languages, including Python, C++, R, and Java. Python libraries tend to be the most popular and can be used to implement recently published algorithms. Abbreviations: DL, deep learning
Fig. 6 A diagram showing data processing. a The typical workflow of an AI experiment. b Illustration of the k-fold cross-validation technique (k = 10). Abbreviation: AUC, area under the curve
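The k-fold cross-validation in Fig. 6b partitions the data into k disjoint folds and rotates the validation fold. A minimal sketch of the index bookkeeping, assuming a simple interleaved assignment of samples to folds:

```python
def kfold_indices(n_samples, k=10):
    """Split sample indices into k disjoint folds; each fold serves once as
    the validation set while the remaining k-1 folds form the training set."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for i, val in enumerate(folds):
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        yield train, val

# 50 hypothetical samples, k = 10: each split trains on 45 and validates on 5
for train_idx, val_idx in kfold_indices(50, k=10):
    assert len(train_idx) == 45 and len(val_idx) == 5
    assert not set(train_idx) & set(val_idx)   # folds never overlap
```

Every sample is used for validation exactly once across the k splits, so the averaged validation score (e.g., AUC) uses all of the data without ever testing a model on samples it was trained on.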
Common metrics in AI model evaluation
| Evaluation metrics | Definitions |
|---|---|
| Accuracy | The proportion of both positives and negatives that are correctly identified; the higher the accuracy, the better the classifier |
| Sensitivity/Recall | The proportion of positives that are correctly identified |
| Specificity | The proportion of negatives that are correctly identified |
| Precision | The proportion of samples identified as positive that are truly positive |
| Kappa value | The agreement between two sets of observations, corrected for the agreement expected by chance |
| Dice coefficient/F1 score | Harmonic average of the precision and recall, where an F1 score reaches its best value at 1 and worst at 0 |
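The metrics in the table all derive from the four confusion-matrix counts. A self-contained sketch with hypothetical screening counts:

```python
def metrics(tp, fp, tn, fn):
    """Evaluation metrics from confusion-matrix counts
    (tp/fp/tn/fn = true/false positives and negatives)."""
    total       = tp + fp + tn + fn
    accuracy    = (tp + tn) / total
    sensitivity = tp / (tp + fn)          # a.k.a. recall
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)
    f1          = 2 * precision * sensitivity / (precision + sensitivity)
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_e   = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total ** 2
    kappa = (accuracy - p_e) / (1 - p_e)
    return accuracy, sensitivity, specificity, precision, f1, kappa

# Hypothetical screening run: 80 TP, 10 FP, 95 TN, 15 FN
acc, sens, spec, prec, f1, kappa = metrics(80, 10, 95, 15)
print(f"acc={acc:.3f} sens={sens:.3f} spec={spec:.3f} f1={f1:.3f}")
# acc=0.875 sens=0.842 spec=0.905 f1=0.865
```

Note that accuracy alone can be misleading on imbalanced screening data, which is why the papers summarized below typically report sensitivity, specificity, and AUC together.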
Fig. 7 Publication statistics of AI applications. a Publication statistics of AI applications in different imaging modalities per year, indexed in the PubMed database (Jan 1st, 2016 to Oct 1st, 2019). b Publication statistics of AI applications in diagnosing different ophthalmological diseases per year, indexed in the PubMed database (Jan 1st, 2016 to Oct 1st, 2019)
FDA-cleared medical AI products
| AI products | Production companies | Applications |
|---|---|---|
| Kardia App | Kardia Band, AliveCor, United States | Clinical-grade wearable electrocardiogram in the Apple Watch |
| The WAVE Clinical Platform | Excel Medical Electronics, United States | Patient surveillance and predictive algorithm platform |
| Embrace Watch | Embrace, United States | The smartwatch that uses sensors to measure stress and predict seizures |
| Viz LVO | Viz.AI, United States | Automatic detection of large vessel occlusion in suspected stroke patients |
| Cognoa App | Cognoa, United States | An app based on ML that can help clinicians diagnose autism rapidly |
| Guardian Connect | Medtronic, United States | The continuous glucose monitoring system for people on multiple daily insulin injections |
| IDx-DR | IDx, United States | Automated diagnosis of DR before it causes blindness |
| OsteoDetect | Imagen Technologies, United States | A type of computer-aided detection and diagnosis software designed to detect wrist fractures in patients |
| DreaMed Advisor Pro | DreaMed Diabetes, Petah Tikvah, Israel | Automated insulin pump setting adjustments in patients with type 1 diabetes |
| Viz CTP | Viz.AI, United States | A software package to perform image processing and analysis of CT perfusion scans of the brain |
FDA = U.S. Food and Drug Administration; DR = diabetic retinopathy; CT = computed tomography; ML = machine learning
Common publicly available databases
| Datasets | Imaging Modalities | Population | Amount | Annotation |
|---|---|---|---|---|
| Kaggle | FP | United States | 53,576 | DR |
| EyePACS | FP | United States | 35,126 | DR |
| MESSIDOR | FP | France | 1200 | DR; Macular edema |
| E-OPHTHA | FP | France | 463 | DR |
| HRF | FP | Germany | 45 | DR; Glaucoma; Optic disk; Vessel |
| DRIVE | FP | Netherlands | 40 | DR; Vessel |
| RIGA | FP | France; Saudi Arabia | 760 | Glaucoma |
| ORIGA-650 | FP | Singapore | 650 | Glaucoma |
| DRISHTI-GS | FP | India | 101 | Glaucoma |
| INSPIRE-AVR | FP | United States | 40 | Glaucoma |
| REVIEW | FP | United Kingdom | 16 | Vascular disease |
FP = fundus photograph; DR = diabetic retinopathy
Summary of DL methods using FP and OCT to detect eye disease
| Authors | Year | Imaging Modalities | Aim | Data sets | DL techniques | Performance |
|---|---|---|---|---|---|---|
| Arcadu F et al. | 2019 | FP | Diabetic macular thickening detection | Local: 17,997 FPs | Inception-v3 | AUC: 0.97 (central subfield thickness ≥ 250 μm); 0.91 (central foveal thickness ≥ 250 μm); 0.94 (central subfield thickness ≥ 400 μm); 0.96 (central foveal thickness ≥ 400 μm) |
| Nagasawa T et al. | 2019 | FP | Treatment-naïve proliferative diabetic retinopathy detection | Local: 132 FPs | VGG-16 | Sensitivity: 94.7%; Specificity: 97.2%; AUC: 0.969 |
| Phan S et al. | 2019 | FP | Glaucoma detection | Local: 3312 FPs | VGG-19, ResNet-152, DenseNet-201 | AUCs of 0.9 or more (all 3 DCNNs) |
| Nagasato D et al. | 2019 | FP | Branch retinal vein occlusion detection | Local: 466 FPs | VGG-16, SVM | Sensitivity: 94.0%; Specificity: 97.0%; Positive predictive value (PPV): 96.5%; Negative predictive value (NPV): 93.2%; AUC: 97.6% |
| Burlina PM et al. | 2019 | FP | To develop DL techniques for synthesizing high-resolution realistic fundus images | Local: 133,821 FPs | GAN | AUC: 0.9706 (model trained on real data); 0.9235 (model trained on synthetic data) |
| Girard F et al. | 2019 | FP | Joint segmentation and classification of retinal arteries and veins | Public: DRIVE, 40 FPs; MESSIDOR, 1200 FPs | CNN | Accuracy: 94.8%; Sensitivity: 93.7%; Specificity: 92.9% |
| Coyner AS et al. | 2018 | FP | Image quality assessment of fundus images in ROP | Local: 6043 FPs | VGG-19 DCNN | Accuracy: 89.1%; AUC: 0.964 |
| Keel S et al. | 2018 | FP | Detection of referable diabetic retinopathy and glaucoma | Public: LabelMe, 114,906 FPs (referable DR) | | Sensitivity: 90% (glaucomatous optic neuropathy); 96% (referable DR) |
| Sayres R et al. | 2018 | FP | Assisted grading for DR | Public: EyePACS, 1796 FPs | Inception-v4 | Sensitivity: 79.4% (unassisted); 87.5% (grades only); 88.7% (grades plus heatmap) |
| Peng Y et al. | 2018 | FP | Automated classification of AMD severity | Public: AREDS, 59,302 FPs | DeepSeeNet (Inception-v3) | Accuracy: 0.671; AUC: 0.94 (large drusen); 0.93 (pigmentary abnormalities); 0.97 (late AMD) |
| Guo Y et al. | 2018 | FP | Retinal vessel detection | Public: DRIVE, 20 FPs; STARE, 20 FPs | Multiple DCNNs | Accuracy: 95.97% (DRIVE training dataset); 96.13% (DRIVE testing dataset); 95.39% (STARE dataset); AUC: 0.9726 (DRIVE training dataset); 0.9737 (DRIVE testing dataset); 0.9539 (STARE dataset) |
| Khojasteh P et al. | 2018 | FP | Detection of exudates, microaneurysms and hemorrhages | Public: DIARETDB1, 75 FPs; e-Ophtha, 209 FPs | CNN | Accuracy: 97.3% (DIARETDB1); 86.6% (e-Ophtha); Sensitivity: 0.96 (exudates); 0.84 (hemorrhages); 0.85 (microaneurysms) |
| Gargeya R et al. | 2017 | FP | Automated identification of DR | Public: EyePACS, 75,137 FPs; MESSIDOR 2, 1748 FPs; E-Ophtha, 463 FPs | DCNN | Sensitivity: 94%; Specificity: 98%; AUC: 0.97 |
| Burlina PM et al. | 2017 | FP | Automated grading of AMD | Public: AREDS, more than 130,000 FPs | DCNN | Accuracy: 88.4% (SD, 0.5%) to 91.6% (SD, 0.1%); AUC: 0.94 (SD, 0.5%) to 0.96 (SD, 0.1%) |
| Ordóñez PF et al. | 2017 | FP | To improve the accuracy of microaneurysm detection | Public: Kaggle, 88,702 FPs; MESSIDOR, 1200 FPs; DIARETDB1, 89 FPs | Standard CNN, VGG CNN | Sensitivity > 91%; Specificity > 93%; AUC > 93% |
| Takahashi H et al. | 2017 | FP | Improving staging of DR | Local: 9939 FPs | GoogleNet DCNN | Prevalence- and bias-adjusted Fleiss' kappa (PABAK): 0.64 (modified Davis grading); 0.37 (real prognosis grading) |
| Abbas Q et al. | 2017 | FP | Automatic recognition of severity level of DR | Local: 750 FPs | DCNN | Sensitivity: 92.18%; Specificity: 94.50%; AUC: 0.924 |
| Pfister M et al. | 2019 | OCT | Automated segmentation of dermal fillers in OCT images | Local: 100 OCT volume data sets | CNN (U-Net-like architecture) | Accuracy: 0.9938 |
| Fu H et al. | 2019 | OCT | Automated angle-closure detection | Local: 4135 anterior segment OCT images | CNN | Sensitivity: 0.79 ± 0.037; Specificity: 0.87 ± 0.009; AUC: 0.90 |
| Masood S et al. | 2019 | OCT | Automatic choroid layer segmentation from OCT images | Local: 525 OCT images | CNN (CIFAR-10 model) | Accuracy: 97% |
| Dos Santos VA et al. | 2019 | OCT | Segmentation of corneal OCT scans | Local: 20,160 OCT images | CNN | Accuracy: 99.56% |
| Asaoka R et al. | 2019 | OCT | Diagnosis of early-onset glaucoma from OCT images | Local: 4316 OCT images | CNN | AUC: 93.7% |
| Lu W et al. | 2018 | OCT | Classification of multi-categorical abnormalities from OCT images | Local: 60,407 OCT images | ResNet | Accuracy: 0.959; AUC: 0.984 |
| Schlegl T et al. | 2018 | OCT | Detection of macular fluid in OCT images | Local: 1200 OCT scans | CNN | Intraretinal cystoid fluid detection: Accuracy: 0.91; AUC: 0.94; Subretinal fluid detection: Accuracy: 0.61; AUC: 0.92 |
| Prahs P et al. | 2018 | OCT | Evaluation of treatment indication with anti-vascular endothelial growth factor medications | Local: 183,402 OCT scans | GoogleNet Inception DCNN | Accuracy: 95.5%; Sensitivity: 90.1%; Specificity: 96.2%; AUC: 0.968 |
| Shah A et al. | 2018 | OCT | Retinal layer segmentation in OCT images | Local: 3000 OCT scans | CNN | Average computation time: 12.3 s |
| Chan GCY et al. | 2018 | OCT | Automated diabetic macular edema classification | Public: Singapore Eye Research Institute, 14,720 OCT scans | AlexNet, VGG, GoogleNet | Accuracy: 93.75% |
| Muhammad H et al. | 2017 | OCT | Classification of glaucoma suspects | Local: 102 OCT scans | CNN, Random forest | Accuracy: 93.1% (retinal nerve fiber layer) |
| Lee CS et al. | 2017 | OCT | Segmentation of macular edema in OCT | Local: 1289 OCT images | U-Net CNN | Cross-validated Dice coefficient: 0.911 |
| Lee CS et al. | 2017 | OCT | Classification of normal and AMD OCT images | Public: electronic medical records, 101,002 OCT images | VGG-16 | Accuracy: 87.63%; AUC: 92.78% |
DL = deep learning; FP = fundus photography; OCT = optical coherence tomography; CNN = convolutional neural network; DCNN = deep convolutional neural network; DR = diabetic retinopathy; AMD = age-related macular degeneration; AUC = area under the curve