| Literature DB >> 36028647 |
Ziyuan Wang, Srinivas Reddy Sadda, Aaron Lee, Zhihong Jewel Hu.
Abstract
Age-related macular degeneration (AMD) and Stargardt disease are the leading causes of blindness in the elderly and in young adults, respectively. Geographic atrophy (GA) in AMD and Stargardt atrophy are their end-stage outcomes. Efficient methods for segmenting and quantifying these atrophic lesions are critical for clinical research. In this study, we developed a deep convolutional neural network (CNN) with a trainable self-attended mechanism for accurate GA and Stargardt atrophy segmentation. In contrast to traditional post-hoc attention mechanisms, which can only visualize CNN features, our self-attended mechanism is embedded in a fully convolutional network and is directly involved in training the CNN to actively attend to key features, enhancing algorithm performance. We applied the self-attended CNN to the segmentation of AMD and Stargardt atrophic lesions on fundus autofluorescence (FAF) images. Compared with a pre-existing regular fully convolutional network (the U-Net), our self-attended CNN achieved a 10.6% higher Dice coefficient and a 17% higher intersection over union (IoU) for AMD GA segmentation, and a 22% higher Dice coefficient and a 32% higher IoU for Stargardt atrophy segmentation. With longitudinal image data acquired over a longer period, the developed self-attended mechanism could also be applied to the visual discovery of early AMD and Stargardt features.
Year: 2022 PMID: 36028647 PMCID: PMC9418226 DOI: 10.1038/s41598-022-18785-6
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.996
Figure 1. Overview of the entire atrophy segmentation and prediction system using the regular U-Net and the self-attended U-Net, with AMD data as an example.
Figure 2. Illustration of longitudinal image and label alignment for AMD and Stargardt data. Note that the hypofluorescent regions on the FAF images are AMD atrophic lesions (i.e., GA; upper row) and Stargardt atrophic lesions (bottom row), respectively.
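The record does not describe how the baseline and Month-12 images and labels were aligned. The sketch below shows one plausible approach, rigid translation registration via phase cross-correlation (scikit-image), purely as illustration; the function and argument names are assumptions, not the study's procedure.

```python
# Hypothetical sketch of longitudinal alignment: register a Month-12 FAF image to its
# baseline with a translation estimated by phase cross-correlation, then apply the same
# shift to the Month-12 atrophy label. This is an illustrative assumption only.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation


def align_followup(baseline: np.ndarray, followup: np.ndarray, followup_label: np.ndarray):
    # Estimate the (row, col) translation that maps the follow-up image onto the baseline.
    offset = phase_cross_correlation(baseline, followup)[0]
    aligned_image = nd_shift(followup, offset, order=1)              # bilinear for the image
    aligned_label = nd_shift(followup_label.astype(float), offset, order=0) > 0.5  # nearest for the mask
    return aligned_image, aligned_label
```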
Figure 3. Illustration of the self-attended deep CNN mechanism/architecture.
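The abstract describes a trainable self-attended mechanism embedded in a fully convolutional (U-Net style) network. The PyTorch sketch below shows one common way such a trainable attention block can be placed on a U-Net skip connection; the channel sizes and the gating design are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch (PyTorch) of a trainable attention block on a U-Net skip connection,
# in the spirit of the self-attended mechanism described in the abstract. Layer sizes
# and gating design are illustrative assumptions.
import torch
import torch.nn as nn


class AttentionGate(nn.Module):
    """Learns a spatial attention map from decoder (gating) and encoder features."""

    def __init__(self, enc_channels: int, dec_channels: int, inter_channels: int):
        super().__init__()
        self.theta = nn.Conv2d(enc_channels, inter_channels, kernel_size=1)
        self.phi = nn.Conv2d(dec_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, enc_feat: torch.Tensor, dec_feat: torch.Tensor) -> torch.Tensor:
        # Project both feature maps, combine, and squash to a [0, 1] attention map.
        attn = torch.sigmoid(self.psi(torch.relu(self.theta(enc_feat) + self.phi(dec_feat))))
        # Re-weight the encoder features before they are concatenated in the decoder.
        return enc_feat * attn


# Example: gate 64-channel encoder features with 128-channel upsampled decoder features.
gate = AttentionGate(enc_channels=64, dec_channels=128, inter_channels=32)
enc = torch.randn(1, 64, 128, 128)
dec = torch.randn(1, 128, 128, 128)
print(gate(enc, dec).shape)  # torch.Size([1, 64, 128, 128])
```

Because the attention map is produced by learnable convolutions, it is optimized jointly with the segmentation objective, which is what distinguishes this kind of trainable attention from post-hoc feature visualization.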
Performance results of automated segmentation and prediction of atrophic lesions for eyes with AMD and Stargardt disease, reported with 95% CIs.
| CNN | Disease | Visit | Dice | IoU | Accuracy | Sensitivity | Specificity |
|---|---|---|---|---|---|---|---|
| U-Net | AMD | Month0 | 0.77 ± 0.05 | 0.66 ± 0.05 | 0.96 ± 0.01 | 0.78 ± 0.05 | 0.98 ± 0.01 |
| Self-attended U-Net | AMD | Month0 | 0.85 ± 0.04 | 0.77 ± 0.05 | 0.98 ± 0.00 | 0.85 ± 0.04 | 0.99 ± 0.00 |
| Self-attended U-Net | AMD | Month12 | 0.78 ± 0.05 | 0.68 ± 0.05 | 0.95 ± 0.01 | 0.75 ± 0.05 | 0.98 ± 0.00 |
| U-Net | Stargardt | Month0 | 0.65 ± 0.03 | 0.52 ± 0.03 | 0.90 ± 0.01 | 0.54 ± 0.04 | 0.99 ± 0.00 |
| Self-attended U-Net | Stargardt | Month0 | 0.79 ± 0.03 | 0.69 ± 0.03 | 0.95 ± 0.01 | 0.73 ± 0.03 | 0.99 ± 0.00 |
| Self-attended U-Net | Stargardt | Month12 | 0.76 ± 0.04 | 0.64 ± 0.04 | 0.94 ± 0.02 | 0.68 ± 0.04 | 0.99 ± 0.01 |
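For reference, the metrics reported in the table above can be computed per image from binary prediction and ground-truth masks as in the following NumPy sketch. These are the standard definitions, assumed here rather than taken from the study's evaluation code.

```python
# Standard per-image definitions of the metrics reported above, computed from a binary
# prediction mask and a ground-truth mask. Generic sketch, not the paper's exact code.
import numpy as np


def segmentation_metrics(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> dict:
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)    # lesion pixels correctly detected
    fp = np.sum(pred & ~truth)   # background pixels marked as lesion
    fn = np.sum(~pred & truth)   # lesion pixels missed
    tn = np.sum(~pred & ~truth)  # background pixels correctly rejected
    return {
        "dice": 2 * tp / (2 * tp + fp + fn + eps),
        "iou": tp / (tp + fp + fn + eps),
        "accuracy": (tp + tn) / (tp + tn + fp + fn + eps),
        "sensitivity": tp / (tp + fn + eps),
        "specificity": tn / (tn + fp + eps),
    }


# Example with random masks; real use would pass thresholded network output and labels.
rng = np.random.default_rng(0)
print(segmentation_metrics(rng.random((256, 256)) > 0.5, rng.random((256, 256)) > 0.5))
```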
Percent differences and Mann–Whitney test results between different CNNs for different diseases.
| CNN 1 vs CNN 2 | Disease 1 vs disease 2 | Visit | Dice | IoU | Accuracy | Sensitivity | Specificity |
|---|---|---|---|---|---|---|---|
| U-Net vs self-attended U-Net | AMD vs AMD | Month0 | 10.6% ± 1.3% | 17% ± 0.0% | 2.1% ± 0.0% | 9.4% ± 1.3% | 1.2% ± 1.0% |
| U-Net vs self-attended U-Net | Stargardt vs Stargardt | Month0 | 22.0% ± 0.0% | 32% ± 0.0% | 5.4% ± 0.0% | 35.5% ± 1.9% | −0.1% ± 0.0% |
| Self-attended U-Net vs self-attended U-Net | Stargardt vs AMD | Month0 | 7.3% ± 1.3% | 13% ± 2.9% | 2.9% ± 1.0% | 16.6% ± 1.4% | −0.2% ± 0.0% |
| Self-attended U-Net vs self-attended U-Net | Stargardt vs AMD | Month12 | 1.9% ± 1.3% | 6% ± 1.6% | 1.4% ± 1.1% | 10.2% ± 1.5% | −0.6% ± 1.0% |
Percent difference is defined as (CNN2 − CNN1)/CNN1. Statistical significance of the differences was assessed with the Mann–Whitney test.
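A minimal sketch of the comparison described above: the percent difference (CNN2 − CNN1)/CNN1 between summary metrics, plus a Mann–Whitney U test on per-image scores (SciPy). The per-image Dice values below are placeholders, not the study's data.

```python
# Sketch of the comparison reported above: percent difference between two CNNs,
# defined as (CNN2 - CNN1) / CNN1, and a Mann-Whitney U test on per-image scores.
# The Dice arrays below are made-up placeholders, not the study's measurements.
import numpy as np
from scipy.stats import mannwhitneyu


def percent_difference(metric_cnn1: float, metric_cnn2: float) -> float:
    return 100.0 * (metric_cnn2 - metric_cnn1) / metric_cnn1


unet_dice = np.array([0.74, 0.79, 0.76, 0.80, 0.75])        # placeholder per-image Dice, U-Net
attended_dice = np.array([0.84, 0.87, 0.83, 0.86, 0.85])    # placeholder per-image Dice, self-attended U-Net

print(f"percent difference: {percent_difference(unet_dice.mean(), attended_dice.mean()):.1f}%")
print("Mann-Whitney p-value:", mannwhitneyu(unet_dice, attended_dice).pvalue)
```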
Figure 4. Illustration of atrophy segmentation results of the self-attended U-Net and the regular U-Net on baseline images, showing representative performance.
Figure 5. Illustration of the reconstruction and self-attended maps based on the baseline atrophy segmentation in the last downsampling layer.
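Figure 5 visualizes self-attended maps from the last downsampling layer. The sketch below shows one generic way to capture such an internal map with a PyTorch forward hook and upsample it to the input resolution for overlay; the toy model and the hooked module are stand-in assumptions, not the paper's network.

```python
# Generic sketch: capture an internal feature/attention map with a forward hook and
# resize it to the input resolution for visualization. The one-layer model below is a
# toy stand-in; in the real network the hook would target the attention block of the
# last downsampling layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.Sigmoid())

captured = {}
handle = model[0].register_forward_hook(lambda m, i, o: captured.update(attn=o.detach()))

faf_image = torch.randn(1, 1, 256, 256)        # placeholder FAF input tensor
with torch.no_grad():
    _ = model(faf_image)
handle.remove()

# Average the captured channels and upsample to the input size for overlay on the image.
attn_map = F.interpolate(captured["attn"].mean(1, keepdim=True),
                         size=faf_image.shape[-2:], mode="bilinear", align_corners=False)
print(attn_map.shape)  # torch.Size([1, 1, 256, 256])
```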
Figure 6. Illustration of the atrophy progression prediction results based on baseline images and Month 12 ground truth.
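Figure 6 implies a progression-prediction setup in which the network takes the baseline FAF image as input and is trained against the aligned Month-12 atrophy mask. The PyTorch `Dataset` sketch below illustrates that pairing; the class name and arguments are hypothetical.

```python
# Hypothetical data pairing for progression prediction: baseline FAF image as input,
# registered Month-12 atrophy mask as the training target.
import torch
from torch.utils.data import Dataset


class ProgressionPairs(Dataset):
    """Pairs each baseline image with its registered Month-12 ground-truth mask."""

    def __init__(self, baseline_images, month12_masks):
        assert len(baseline_images) == len(month12_masks)
        self.images = baseline_images    # sequence of (H, W) baseline FAF images
        self.masks = month12_masks       # sequence of (H, W) binary Month-12 atrophy masks

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        image = torch.as_tensor(self.images[idx], dtype=torch.float32).unsqueeze(0)   # (1, H, W)
        target = torch.as_tensor(self.masks[idx], dtype=torch.float32).unsqueeze(0)   # (1, H, W)
        return image, target
```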