Emanuele Crincoli, Zhanlin Zhao, Giuseppe Querques, Riccardo Sacconi, Matteo Maria Carlà, Federico Giannuzzi, Silvia Ferrara, Nicolò Ribarich, Gaia L'Abbate, Stanislao Rizzo, Eric H Souied, Alexandra Miere.
Abstract
Initial stages of Best vitelliform macular dystrophy (BVMD) and adult vitelliform macular dystrophy (AVMD) harbor similar blue autofluorescence (BAF) and optical coherence tomography (OCT) features. Nevertheless, BVMD is characterized by a worse final-stage visual acuity (VA) and an earlier onset of critical VA loss. Currently, differential diagnosis requires an invasive and time-consuming process including genetic testing, electrooculography (EOG), full-field electroretinogram (ERG), and visual field testing. The aim of our study was to automatically classify OCT and BAF images from stage II BVMD and AVMD eyes using a deep learning algorithm, and to identify an image processing method that facilitates human-based clinical diagnosis from non-invasive tests such as BAF and OCT without the use of machine learning. After application of a customized image processing method, OCT images were characterized by a dark appearance of the vitelliform deposit in BVMD and a lighter, inhomogeneous appearance in AVMD. By contrast, a customized processing method for BAF images revealed that BVMD and AVMD were characterized, respectively, by the presence or absence of a hypo-autofluorescent region of retina encircling the central hyperautofluorescent foveal lesion. Human-based evaluation of both BAF and OCT images showed significantly higher correspondence to the ground truth reference when performed on processed images. The deep learning classifiers based on BAF and OCT images reached around 90% classification accuracy on both processed and unprocessed images, significantly higher than human performance on either. The ability to differentiate between the two entities without resorting to invasive and expensive tests may offer a valuable clinical tool in the management of the two diseases.
Year: 2022 PMID: 35882966 PMCID: PMC9325755 DOI: 10.1038/s41598-022-16980-z
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.996
Figure 1. OCT (left quadrant) and BAF (right quadrant) images before (first row) and after (second row) image processing. BAF blue autofluorescence, OCT optical coherence tomography.
Classification matrices describing the performances on BAF images of human-based methods on both unprocessed (UP) and processed (P) images and CNN-based methods on both UP and P images.
| BAF images: ground truth | UP human-based: Positive | UP human-based: Negative |
|---|---|---|
| Positive | 45 | 24 |
| Negative | 19 | 38 |
BAF blue autofluorescence, CNN convolutional neural network.
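As a quick sanity check, the raw rates implied by this confusion matrix can be computed directly. The sketch below uses the UP human-based BAF counts from the table above; note the paper itself compares methods via AUROC rather than these point metrics:

```python
# Confusion matrix for UP human-based classification of BAF images
# (rows: ground truth, columns: predicted class), taken from the table above.
tp, fn = 45, 24   # ground-truth positives: correctly / incorrectly classified
fp, tn = 19, 38   # ground-truth negatives: incorrectly / correctly classified

total = tp + fn + fp + tn
accuracy = (tp + tn) / total          # fraction of all cases classified correctly
sensitivity = tp / (tp + fn)          # recall on the positive class
specificity = tn / (tn + fp)          # recall on the negative class

print(f"accuracy={accuracy:.3f}, "
      f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}")
```

These raw rates sit in the mid-0.6 range, consistent with the UP human-based AUROC of 0.614 reported further below.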
Figure 2. ROC curves illustrating accuracy of the 4 classification methods on BAF images. BAF blue autofluorescence, CNN convolutional neural network, P-BAF processed BAF images, UP-BAF unprocessed BAF images.
Classification matrices describing the performances on OCT images of human-based methods on both unprocessed (UP) and processed (P) images and CNN-based methods on both UP and P images.
| OCT images: ground truth | UP human-based: Positive | UP human-based: Negative |
|---|---|---|
| Positive | 46 | 23 |
| Negative | 18 | 39 |
CNN convolutional neural network, OCT optical coherence tomography.
Figure 3. ROC curves illustrating accuracy of the 4 classification methods on OCT images. CNN convolutional neural network, OCT optical coherence tomography, P-OCT processed OCT images, UP-OCT unprocessed OCT images.
Figure 4. GradCAM output highlighting relevant features for each of the 4 deep learning classifiers. Upper left: deep learning classifier for unprocessed OCT images; upper right: deep learning classifier for unprocessed BAF images; lower left: deep learning classifier for processed OCT images; lower right: deep learning classifier for processed BAF images. BAF blue autofluorescence, OCT optical coherence tomography.
Comparison of performances (AUROCs) of the 4 different methods on BAF images (first row) and OCT images (second row).
| | UP hb | P hb | UP CNN | P CNN | p value |
|---|---|---|---|---|---|
| BAF images | 0.614 (CI 0.557–0.672) | 0.785 (CI 0.768–0.810) | 0.861 (CI 0.843–0.879) | 0.880 (CI 0.862–0.896) | < 0.001 |
| OCT images | 0.662 (CI 0.657–0.684) | 0.741 (CI 0.718–0.757) | 0.867 (CI 0.853–0.881) | 0.893 (CI 0.882–0.911) | < 0.001 |
The analysis responds to the question "which method performed better in differential diagnosis based on BAF/OCT images?". BAF blue autofluorescence, CNN convolutional neural network, OCT optical coherence tomography, P CNN CNN classification of processed images, P hb human-based classification of processed images, UP CNN CNN classification of unprocessed images, UP hb human-based classification of unprocessed images.
Comparison of classification accuracy using either BAF or OCT images for each of the 4 described methods.
| Method | BAF images | OCT images | p value |
|---|---|---|---|
| UP hb | 0.614 (CI 0.557–0.672) | 0.662 (CI 0.657–0.684) | 0.031 |
| P hb | 0.785 (CI 0.768–0.810) | 0.741 (CI 0.718–0.757) | 0.025 |
| UP CNN | 0.861 (CI 0.843–0.879) | 0.867 (CI 0.853–0.881) | 0.652 |
| P CNN | 0.880 (CI 0.862–0.896) | 0.893 (CI 0.882–0.911) | 0.790 |
The analysis responds to the question "Does UP hb/P hb/UP CNN/P CNN perform better when applied on BAF or OCT images?". BAF blue autofluorescence, CNN convolutional neural network, OCT optical coherence tomography, P CNN CNN classification of processed images, P hb human-based classification of processed images, UP CNN CNN classification of unprocessed images, UP hb human-based classification of unprocessed images.
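The AUROC values compared in these tables have a simple probabilistic reading: the chance that a randomly chosen positive case receives a higher classifier score than a randomly chosen negative case. A minimal pure-Python sketch of this Mann–Whitney formulation is shown below; the scores are invented for illustration only and are not the study's data:

```python
def auroc(pos_scores, neg_scores):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case scores higher than a randomly chosen
    negative case, with ties counted as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical classifier scores (not from the study).
pos = [0.9, 0.8, 0.7, 0.4]
neg = [0.6, 0.3, 0.2, 0.1]
print(auroc(pos, neg))  # one misordered pair out of 16 -> 0.9375
```

An AUROC of 0.5 corresponds to chance-level discrimination, which is why the human-based values around 0.61–0.66 on unprocessed images represent only modest performance compared with the CNN's 0.86–0.89.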