| Literature DB >> 32704417 |
Ryan T Yanagihara, Cecilia S Lee, Daniel Shu Wei Ting, Aaron Y Lee.
Abstract
Artificial intelligence (AI)-based automated classification and segmentation of optical coherence tomography (OCT) features have become increasingly popular. However, the 3-dimensional volumetric nature of OCT has made it challenging to develop an algorithm that generalizes across all patient populations and OCT devices. Several recent studies have reported high diagnostic performance for AI models; however, significant methodological challenges remain in applying these models in real-world clinical practice. The lack of large image datasets from multiple OCT devices, nonstandardized imaging or post-processing protocols between devices, limited graphics processing unit capabilities for exploiting 3-dimensional features, and inconsistency in reporting metrics are major hurdles to enabling AI for OCT analyses. We discuss these issues and present possible solutions. Copyright 2020 The Authors.
Keywords: artificial intelligence; deep learning; optical coherence tomography
Year: 2020 PMID: 32704417 PMCID: PMC7347025 DOI: 10.1167/tvst.9.2.11
Source DB: PubMed Journal: Transl Vis Sci Technol ISSN: 2164-2591 Impact factor: 3.048
Figure. Artificial intelligence (AI)-generated prediction optical coherence tomography (OCT) image overestimates treatment response in treatment-naive neovascular age-related macular degeneration patients 3 months after monthly injections (loading dose) of anti–vascular endothelial growth factor. In case 1, subretinal fluid (SRF) at baseline (A) resolves after treatment at month 3 per ground truth OCT image (B), which the AI-generated OCT image correctly predicts (C). In case 2, SRF at baseline (D) improves but persists at postinjection month 3 (E); however, the AI-generated prediction (F) incorrectly assumes complete resolution of SRF.