| Literature DB >> 35962007 |
Rahul M Dhodapkar1, Emily Li2, Kristen Nwanyanwu1, Ron Adelman1, Smita Krishnaswamy3,4, Jay C Wang5.
Abstract
Optical coherence tomography angiography (OCTA) is an emerging non-invasive technique for imaging the retinal vasculature. While there are many promising clinical applications for OCTA, determination of image quality remains a challenge. We developed a deep learning-based system using a ResNet152 neural network classifier, pretrained on ImageNet, to classify images of the superficial capillary plexus in 347 scans from 134 patients. Images were also manually graded by two independent graders as a ground truth for the supervised learning models. Because requirements for image quality may vary depending on the clinical or research setting, two models were trained: one to identify high-quality images and one to identify low-quality images. Our neural network models demonstrated outstanding area under the curve (AUC) metrics for both low-quality image identification (AUC = 0.99, 95% CI 0.98-1.00, κ = 0.90) and high-quality image identification (AUC = 0.97, 95% CI 0.96-0.99, κ = 0.81), significantly outperforming machine-reported signal strength (AUC = 0.82, 95% CI 0.77-0.86, κ = 0.52 and AUC = 0.78, 95% CI 0.73-0.83, κ = 0.27, respectively). Our study demonstrates that techniques from machine learning may be used to develop flexible and robust methods for quality control of OCTA images.
Year: 2022 PMID: 35962007 PMCID: PMC9374672 DOI: 10.1038/s41598-022-17709-8
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.996
Figure 1. Representative examples of 8 × 8 mm OCTA images of the superficial capillary plexus with gradability scores of 2 (A,B), 1 (C,D), and 0 (E,F). Image artifacts displayed include blink lines (arrowheads), segmentation artifact (asterisks), and media opacity (arrows). Image (E) is also decentered.
Figure 3. ResNet transfer learning achieves significant improvements over machine-reported signal strength in identification of image quality in both low-quality and high-quality use cases. (A) Simplified architecture diagrams for the pretrained (i) ResNet152 and (ii) AlexNet architectures employed. (B) Training history and receiver operating characteristic curves for ResNet152 versus machine-reported signal strength and AlexNet for the low-quality criterion. (C) Training history and receiver operating characteristic curves for ResNet152 versus machine-reported signal strength and AlexNet for the high-quality criterion.
Patient characteristics for OCTA images show that patient age and diabetic retinopathy status are significantly associated with image quality, whereas sex and ethnicity are not.
| | All images | Low-quality images | High-quality images | p-value |
|---|---|---|---|---|
| n | 347 | 189 | 80 | |
| Age (years) | | | | < 0.001 |
| 18-44 | 68 (19.6%) | 23 (12.2%) | 27 (33.8%) | |
| 45-64 | 165 (47.6%) | 92 (48.7%) | 35 (43.8%) | |
| 65+ | 114 (32.9%) | 74 (39.2%) | 18 (22.5%) | |
| Sex | | | | 0.52 |
| Male | 136 (39.2%) | 71 (37.6%) | 36 (45.0%) | |
| Female | 211 (60.8%) | 118 (62.4%) | 44 (55.0%) | |
| Ethnicity | | | | 0.37 |
| Caucasian | 107 (30.8%) | 59 (31.2%) | 29 (36.3%) | |
| Black | 113 (32.6%) | 59 (31.2%) | 19 (23.8%) | |
| Hispanic | 107 (30.8%) | 64 (33.9%) | 23 (28.8%) | |
| Asian | 14 (4.0%) | 5 (2.6%) | 7 (8.8%) | |
| Other | 6 (1.7%) | 2 (1.1%) | 2 (2.5%) | |
| DR status* | | | | 0.017 |
| None | 105 (30.8%) | 50 (27.3%) | 27 (33.8%) | |
| NPDR | 139 (40.8%) | 62 (33.9%) | 38 (47.5%) | |
| PDR | 97 (28.4%) | 71 (38.8%) | 15 (18.8%) | |
P-values were calculated using the chi-square statistic.
DR diabetic retinopathy, NPDR nonproliferative diabetic retinopathy, PDR proliferative diabetic retinopathy.
* Diabetic retinopathy status could not be accurately determined for 6 images, which have been omitted.
Figure 2. Plot showing gradability scores for each independent grader and comparison with machine-reported signal strength. (A) The sum of gradability scores was used to create a total gradability score. Images with a total gradability score of 4 were designated as high quality, and those with a total gradability score of less than or equal to 1 were designated as low quality. (B) Signal strength is correlated with manual gradability score, but images of high signal strength may still be of low quality. Red dashed line indicates the manufacturer-recommended signal strength-based quality cutoff (signal strength 6).
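The labeling rule from Figure 2A (each grader scores 0-2; the two scores are summed) can be sketched as a small function. The function name is ours; the thresholds are taken directly from the caption.

```python
def quality_label(grader1: int, grader2: int):
    """Map two independent 0-2 gradability scores to a quality label.

    Per Figure 2A: total score of 4 -> "high"; total score <= 1 -> "low";
    intermediate totals (2-3) fall under neither criterion.
    """
    for score in (grader1, grader2):
        if score not in (0, 1, 2):
            raise ValueError("gradability scores must be 0, 1, or 2")
    total = grader1 + grader2
    if total == 4:
        return "high"
    if total <= 1:
        return "low"
    return None  # intermediate quality: excluded from both criteria
```

Note that "high" requires both graders to assign the maximum score, which is why the high-quality subset (80 images) is smaller than the low-quality subset (189 images) in Table 1.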
Validation of low-quality and high-quality ResNet models trained on 8 × 8 mm images against 8 × 8 mm and 6 × 6 mm OCTA superficial slab images shows robust quality assessment.
| | 8 × 8 mm validation images | 6 × 6 mm validation images |
|---|---|---|
| Low-quality ResNet (trained on low-quality 8 × 8 mm images) | AUC = 0.99, 95% CI [0.98–1.00]; Accuracy = 95.3%; Cohen's κ = 0.90 | AUC = 0.83, 95% CI [0.69–0.98]; Accuracy = 87.5%; Cohen's κ = 0.70 |
| Low-quality CNN (trained on low-quality 8 × 8 mm images) | AUC = 0.97, 95% CI [0.95–0.98]; Accuracy = 91.0%; Cohen's κ = 0.82 | AUC = 0.79, 95% CI [0.63–0.96]; Accuracy = 68.8%; Cohen's κ = 0.38 |
| High-quality ResNet (trained on high-quality 8 × 8 mm images) | AUC = 0.97, 95% CI [0.96–0.99]; Accuracy = 93.5%; Cohen's κ = 0.81 | AUC = 0.85, 95% CI [0.55–1.00]; Accuracy = 71.9%; Cohen's κ = 0.41 |
| High-quality CNN (trained on high-quality 8 × 8 mm images) | AUC = 0.94, 95% CI [0.91–0.97]; Accuracy = 90.1%; Cohen's κ = 0.71 | AUC = 0.67, 95% CI [0.19–1.00]; Accuracy = 71.8%; Cohen's κ = 0.41 |
AUC area under the curve.
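Cohen's κ, reported alongside AUC above, measures agreement between model predictions and the manual labels beyond what chance alone would produce. A minimal stdlib-only implementation (illustrative, not the authors' code) for label sequences:

```python
from collections import Counter


def cohens_kappa(y_true, y_pred) -> float:
    """Cohen's kappa for two equal-length label sequences.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance from the marginal label
    frequencies of each sequence.
    """
    if len(y_true) != len(y_pred) or not y_true:
        raise ValueError("inputs must be non-empty and of equal length")
    n = len(y_true)
    # Observed agreement: fraction of positions where labels match.
    p_o = sum(a == b for a, b in zip(y_true, y_pred)) / n
    # Chance agreement from the marginal frequencies of each label.
    true_counts, pred_counts = Counter(y_true), Counter(y_pred)
    p_e = sum(true_counts[k] * pred_counts.get(k, 0) for k in true_counts) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

κ = 1 indicates perfect agreement and κ = 0 indicates agreement no better than chance, which is why the κ ≈ 0.4 values for the 6 × 6 mm transfer setting represent only moderate agreement despite accuracy above 70%.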
Figure 4. Class activation maps for ResNet152 and AlexNet models highlight features associated with image quality. (A) Class activation maps show coherent activation following the superficial retinal vasculature in the 8 × 8 mm validation images and (B) to a lesser degree in 6 × 6 mm validation images. LQ: model trained with low-quality criteria; HQ: model trained with high-quality criteria.