| Literature DB >> 32587159 |
Sreetama Dutt, Anand Sivaraman, Florian Savoy, Ramachandran Rajalakshmi.
Abstract
Artificial intelligence (AI) in healthcare is the use of computer algorithms to analyze complex medical data, detect associations, and provide diagnostic support outputs. AI and deep learning (DL) find obvious applications in fields like ophthalmology, where huge amounts of image-based data need to be analyzed and the outcomes related to image recognition are reasonably well defined. AI and DL have found important roles in ophthalmology in the early screening and detection of conditions such as diabetic retinopathy (DR), age-related macular degeneration (ARMD), retinopathy of prematurity (ROP), glaucoma, and other ocular disorders, making successful inroads into early screening and diagnosis, with the advantages of high screening accuracy, consistency, and scalability. However, AI algorithms need equally skilled manpower: trained optometrists/ophthalmologists (annotators) to provide accurate ground truth for the training images. The basis of the diagnoses made by AI algorithms is mechanical, and some amount of human intervention is necessary for further interpretation. This review was conducted after tracing the history of AI in ophthalmology across multiple research databases and aims to summarise the journey of AI in ophthalmology so far, taking a close look at the most crucial studies conducted. The article further aims to highlight the potential impact of AI in ophthalmology, its pitfalls, and how to use it optimally for the maximum benefit of ophthalmologists, healthcare systems, and patients alike.
Keywords: Age-related macular degeneration; anterior-segment diseases; artificial intelligence; cataract; deep learning; diabetic retinopathy; glaucoma; machine learning; ophthalmology; retinopathy of prematurity
Year: 2020 PMID: 32587159 PMCID: PMC7574057 DOI: 10.4103/ijo.IJO_1754_19
Source DB: PubMed Journal: Indian J Ophthalmol ISSN: 0301-4738 Impact factor: 1.848
A review of the performance of various artificial intelligence algorithms validated in prospective as well as retrospective studies in the detection of referable diabetic retinopathy (RDR) using fundus images
| Study (Authors) | Type of Study | Camera/AI Algorithm | Dataset | Sensitivity (%) | Specificity (%) |
|---|---|---|---|---|---|
| Rajalakshmi | Retrospective | Remidio, Fundus on Phone (FOP)/EyeArt | Internally generated dataset | 99.3 | 66.8 |
| Abràmoff | Retrospective | Topcon TRC NW6 nonmydriatic fundus camera/IDx-DR X2 | MESSIDOR-2 | 96.8 | 87 |
| Gulshan | Retrospective | Topcon TRC NW6 nonmydriatic camera/Inception-V3 | MESSIDOR-2 | 87 | 98.5 |
| Gulshan | Retrospective | | EyePACS-1 | 90.3 | 98.1 |
| Ting | Retrospective | FundusVue, Canon, Topcon, and Carl Zeiss/VGG-19 | SiDRP 14-15 | 90.5 | 91.6 |
| | | | Guangdong | 98.7 | 81.6 |
| | | | SIMES | 97.1 | 82.0 |
| | | | SINDI | 99.3 | 73.3 |
| | | | SCES | 100 | 76.3 |
| | | | BES | 94.4 | 88.5 |
| | | | AFEDS | 98.8 | 86.5 |
| | | | RVEEH | 98.9 | 92.2 |
| | | | Mexican | 91.8 | 84.8 |
| | | | CUHK | 99.3 | 83.1 |
| | | | HKU | 100 | 81.3 |
| Ramachandran | Retrospective | Canon CR-2 Plus Digital Nonmydriatic Retinal Camera (Canon Inc., Melville, New York, USA)/Visiona | ODEMS | 84.6 | 79.7 |
| Ramachandran | Retrospective | Canon CR-2 Plus Digital Nonmydriatic Retinal Camera (Canon Inc., Melville, New York, USA)/Visiona | Messidor | 96 | 90 |
| Natarajan | Prospective | Remidio Nonmydriatic Fundus on Phone (NM FOP 10)/Medios AI | Internally generated dataset | 100 | 88.4 |
| Sosale | Prospective | Remidio Nonmydriatic Fundus on Phone (NM FOP 10)/Medios AI | Internally generated dataset | 98.8 | 86.7 |
Figure 1. (a) Interface of the inbuilt, automated, offline AI algorithm Medios AI, integrated into Fundus on Phone (FOP) to provide instant DR diagnosis. (b) Sample report generated, showing heat maps highlighting DR lesions
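Every table in this record reports algorithm performance as sensitivity and specificity. A minimal sketch of how those percentages are derived from a screening confusion matrix may be useful; the counts below are hypothetical and do not come from any of the cited studies.

```python
# Illustrative only: how sensitivity/specificity percentages, as reported in
# the tables, are computed from true/false positive and negative counts.

def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate (%): share of truly referable eyes the algorithm flags."""
    return 100.0 * tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate (%): share of non-referable eyes the algorithm clears."""
    return 100.0 * tn / (tn + fp)

# Hypothetical screening run: 200 referable and 800 non-referable eyes.
tp, fn = 198, 2      # referable eyes detected vs. missed
tn, fp = 534, 266    # non-referable eyes cleared vs. over-referred
print(f"Sensitivity: {sensitivity(tp, fn):.1f}%")  # Sensitivity: 99.0%
print(f"Specificity: {specificity(tn, fp):.1f}%")
```

A high-sensitivity operating point (few missed cases) typically trades off specificity (more over-referrals), which is visible in several rows above, e.g. sensitivities near 100% paired with specificities in the 60-80% range.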
A review of the performance of various artificial intelligence algorithms tested for the detection of age-related macular degeneration (ARMD)
| Study (Authors)/Image Used | AI Algorithm/Dataset | AI Utility | Sensitivity (%) | Specificity (%) |
|---|---|---|---|---|
| Burlina | DCNN-A WS/National Institutes of Health AREDS | Detecting the presence of AMD from the dataset and differentiating from normal images | 88.4 | 94.1 |
| | DCNN-U WS | | 73.5 | 91.8 |
| | DCNN-A NSG | | 87.2 | 93.4 |
| | DCNN-U NSG | | 73.8 | 92.1 |
| | DCNN-A NS | | 85.7 | 93.4 |
| | DCNN-U NS | | 72.8 | 91.5 |
| Lee | Modified VGG16/Heidelberg Spectralis (Heidelberg Engineering, Heidelberg, Germany) imaging database | Detecting the presence of AMD from the dataset and differentiating from normal images | 92.6 | 93.7 |
| Treder | DCNN (using the open-source deep-learning framework TensorFlow™, Google Inc., Mountain View, CA, USA)/ImageNet | Detecting the presence of AMD from the dataset and differentiating from normal images | 100 | 92 |
| Sengupta | Transfer learning/Privately generated dataset with 51,140 normal, 8,617 drusen, 37,206 CNV, and 11,349 DME images | Differentiating AMD/DME images from a dataset consisting of all conditions causing treatable blindness | 97.8 | 97.4 |
| Sengupta | DCNN/AREDS | | 66.34 | 88.95 |
| | DCNN/Tsukazaki Hospital database | | 100 | 97.31 |
| | CNN/Kasturba Medical College database | | 96.43 | 93.45 |
| Hwang | VGG16/Internally generated database with 35,900 images | Identify normal images without AMD | 99.07 | 99.54 |
| | | Identify dry AMD | 83.99 | 99.34 |
| | | Identify inactive wet AMD | 96.07 | 90.40 |
| | | Identify active wet AMD | 86.47 | 99.05 |
| | Inception V3 | Identify normal images without AMD | 99.38 | 99.70 |
| | | Identify dry AMD | 85.64 | 99.57 |
| | | Identify inactive wet AMD | 97.11 | 91.82 |
| | | Identify active wet AMD | 88.53 | 98.99 |
| | ResNet50 | Identify normal images without AMD | 99.17 | 99.80 |
| | | Identify dry AMD | 81.20 | 99.45 |
| | | Identify inactive wet AMD | 95.35 | 90.24 |
| | | Identify active wet AMD | 87.19 | 97.84 |
A review of the performance of various artificial intelligence algorithms tested for the detection of retinopathy of prematurity (ROP)
| Study (Authors) | Image | AI Algorithm/Dataset | Sensitivity (%) | Specificity (%) |
|---|---|---|---|---|
| Worrall | Fundus images | Bayesian CNN (per image)/Canada | 82.5 | 98.3 |
| | | Bayesian CNN (per exam) | 95.4 | 94.7 |
| Zhang | Wide-angle retinal images | AlexNet/Private dataset with 420,365 wide-angle retinal images | 72.9 | 78.7 |
| | | VGG-16 | 98.7 | 97.8 |
| | | GoogleNet | 96.8 | 98.2 |
A review of the performance of various artificial intelligence algorithms tested for the detection of glaucoma
| Study (Authors) | Image | AI Algorithm/Dataset | Sensitivity (%) | Specificity (%) |
|---|---|---|---|---|
| Sengupta | Fundus image | DENet/SECS, SINDI | 70.67, 37.53 | |
| | | Inception V3/Private database with 48,000+ images | 95.6 | 92 |
| | | MB-NN/Private database | 92.33 | 90.9 |
| | OCT images | MCDN/Private database | 88.89 | 89.63 |
| Yousefi | OCT images | Ensemble classifier combining a Bayesian net, Lazy K Star, meta classification using regression, meta ensemble selection, an alternating decision tree (AD tree), a random-forest tree, and a simple classification and regression tree (CART)/Privately generated dataset from the University of California, San Diego (UCSD) Diagnostic Innovations in Glaucoma Study (DIGS) and the African Descent and Glaucoma Evaluation Study (ADAGES); assessed RNFL thickness | 80.0 | 73.0 |