Sebastian M Waldstein, Philipp Seeböck, René Donner, Amir Sadeghipour, Hrvoje Bogunović, Aaron Osborne, Ursula Schmidt-Erfurth.
Abstract
Artificial intelligence has recently made a disruptive impact in medical imaging by successfully automating expert-level diagnostic tasks. However, replicating human-made decisions may inherently be biased by the fallible and dogmatic nature of human experts, in addition to requiring prohibitive amounts of training data. In this paper, we introduce an unsupervised deep learning architecture particularly designed for OCT representations for unbiased, purely data-driven biomarker discovery. We developed artificial intelligence technology that provides biomarker candidates without any restricting input or domain knowledge beyond raw images. Analyzing 54,900 retinal optical coherence tomography (OCT) volume scans of 1094 patients with age-related macular degeneration, we generated a vocabulary of 20 local and global markers capturing characteristic retinal patterns. The resulting markers were validated by linking them with clinical outcomes (visual acuity, lesion activity and retinal morphology) using correlation and machine learning regression. The newly identified features correlated well with specific biomarkers traditionally used in clinical practice (r up to 0.73), and outperformed them in correlating with visual acuity ([Formula: see text] compared to [Formula: see text] for conventional markers), despite representing an enormous compression of OCT imaging data (67 million voxels to 20 features). In addition, our method discovered hitherto unknown, clinically relevant biomarker candidates. The presented deep learning approach identified known as well as novel medical imaging biomarkers without any prior domain knowledge. Similar approaches may be worthwhile across other medical imaging fields.
Year: 2020 PMID: 32737379 PMCID: PMC7395081 DOI: 10.1038/s41598-020-69814-1
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1. Flow-chart of the proposed two-level deep learning pipeline. In each step, an auto-encoder learns to encode the input data in a lower-dimensional embedding. First, the local encoder transforms each A-scan into a 20-dimensional local representation, resulting in 20 2D feature maps. This local representation forms the input of the second stage, the global encoder. The global features provide a compact representation of an entire three-dimensional dataset in only 20 numbers.
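The two-level encoding described in Figure 1 can be sketched in a few lines. The snippet below is a minimal stand-in, not the authors' implementation: it uses linear auto-encoders (whose optimal encoder is given by the top principal directions via SVD) in place of the paper's deep nonlinear networks, and toy 32 × 32 × 64 volumes in place of real OCT scans. All shapes and variable names are illustrative.

```python
import numpy as np

def linear_autoencoder_encoder(X, dim):
    """Optimal *linear* auto-encoder: the top `dim` principal directions.
    The paper uses deep nonlinear encoders; this is a linear stand-in."""
    Xc = X - X.mean(axis=0)                    # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:dim].T                          # (d, dim) projection matrix

rng = np.random.default_rng(0)
# Toy volume: a 32 x 32 grid of A-scans, each 64 voxels deep.
volume = rng.normal(size=(32 * 32, 64))

# Level 1: the local encoder maps each A-scan to 20 features,
# yielding 20 two-dimensional feature maps per volume.
W_local = linear_autoencoder_encoder(volume, 20)
feature_maps = (volume @ W_local).reshape(32, 32, 20)

# Level 2: the global encoder compresses each volume's stacked feature
# maps into only 20 numbers (trained here across 50 toy volumes).
volumes = rng.normal(size=(50, 32 * 32, 64))
maps = np.stack([(v @ W_local).reshape(-1) for v in volumes])  # (50, 20480)
W_global = linear_autoencoder_encoder(maps, 20)
global_features = maps @ W_global              # 20 numbers per volume
```

The key design point survives the simplification: each stage only ever sees raw intensities or its predecessor's features, so no clinical labels or domain knowledge enter the representation.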
Figure 2. Representative examples of feature maps obtained by the local embedding. The composites to the right of each column show heatmaps of conventional biomarkers obtained by validated automated image segmentation algorithms[11, 12]. High and low activation of the detected new biomarkers with concomitant visual function are shown side-by-side. Top row: Feature (a5) demonstrates a pronounced negative structure-function correlation, despite a low correspondence to retinal fluid, the conventional marker considered most relevant for vision. We assume that this biomarker candidate corresponds to subretinal hyperreflective material (arrow). Middle row: Feature (a17) demonstrates the best correlation with markers of exudation as conventionally measured in OCT. An excellent correspondence is observed, for instance, for intraretinal cystoid fluid (compare the lobulated pattern). Bottom row: Feature (a4) represents a new subclinical biomarker candidate discovered in this work (arrows). The marker does not intrinsically correspond to previously reported clinical entities in OCT images. Remarkably, a positive correlation between the activation of a4 and visual function markers was noted. Color bars indicate the activation level from maximum (dark) to minimum (light). IRC, intraretinal cystoid fluid; PED, pigment epithelial detachment; RT, retinal thickness; SRF, subretinal fluid; SHRM, subretinal hyperreflective material.
Figure 3. Univariate Pearson correlation coefficients between the 20 identified unsupervised local features (a1–a20) and functional variables as well as measures of disease activity by OCT and fluorescein angiography. Green colour indicates a positive, and blue colour a negative correlation. The level of correlation is colour coded, and the strongest correlation for each variable is shown in a box. Correlations with no significant difference from 0 are greyed out.
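The screening summarized in Figure 3 amounts to computing one Pearson r per feature/outcome pair and highlighting the strongest. A minimal sketch on synthetic data (the feature matrix, the planted effect on feature a5, and all sample sizes are hypothetical, chosen only to make the mechanics visible):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))

rng = np.random.default_rng(0)
n = 200
features = rng.normal(size=(n, 20))   # toy stand-ins for a1..a20, one row per eye
# Toy outcome: negatively driven by "a5" (index 4), as in Figure 2, top row.
bcva = -2.0 * features[:, 4] + rng.normal(size=n)

# One univariate coefficient per feature; the most negative one is "boxed".
r_per_feature = np.array([pearson_r(features[:, j], bcva) for j in range(20)])
strongest = int(np.argmin(r_per_feature))
```

In the paper the same scan additionally greys out coefficients whose p-value does not reach significance; that thresholding step is omitted here.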
Machine learning prediction of functional and morphological target variables from local and global features.
| | Visual function | | Optical coherence tomography | | | | Fluorescein angiography | |
|---|---|---|---|---|---|---|---|---|
| | BCVA (letter score) | LLVA (letter score) | RT ( | IRC (nl) | SRF (nl) | PED (nl) | Lesion area ( | Leakage area ( |
| R² | 0.26 | 0.44 | 0.65 | 0.09 | 0.44 | 0.20 | 0.27 | 0.22 |
| MAE | | | | | | | | |
| R² | 0.29 | 0.46 | 0.64 | 0.19 | 0.27 | 0.28 | 0.21 | 0.15 |
| MAE | | | | | | | | |
For each outcome variable, the coefficient of determination (R²) and mean absolute error (MAE) are shown. BCVA, best-corrected visual acuity; IRC, intraretinal cystoid fluid; LLVA, low luminance visual acuity; nl, nanoliter; PED, pigment epithelial detachment; RT, retinal thickness; SRF, subretinal fluid.
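The validation metrics in the table above can be reproduced mechanically once a regression model is fitted. The sketch below uses ordinary least squares as a stand-in (the paper does not specify its regressor here) and synthetic data; only the two metric definitions, R² and MAE, are meant to match the table.

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

def mean_absolute_error(y_true, y_pred):
    return float(np.mean(np.abs(y_true - y_pred)))

# Toy setup: predict one outcome (e.g. a BCVA letter score) from 20 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
y = X @ rng.normal(size=20) + rng.normal(scale=2.0, size=300)

# Held-out split: fit on the first 200 eyes, evaluate on the last 100.
X_tr, X_te, y_tr, y_te = X[:200], X[200:], y[:200], y[200:]
coef, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)  # OLS stand-in model
y_hat = X_te @ coef

r2 = r2_score(y_te, y_hat)
mae = mean_absolute_error(y_te, y_hat)
```

Note that R² is computed on held-out predictions, so a value near 0 (as for IRC in the first row) means the features carry little predictive signal for that outcome, not that the fit failed.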