Alicia Pareja-Ríos1, Sabato Ceruso2, Pedro Romero-Aroca3, Sergio Bonaque-González4.
Abstract
We report the development of a deep learning algorithm (AI) to detect signs of diabetic retinopathy (DR) from fundus images. For this, we used a ResNet-50 neural network with double resolution, the addition of Squeeze-and-Excitation blocks, pre-trained on ImageNet, and trained for 50 epochs using the Adam optimizer. The AI-based algorithm not only classifies an image as pathological or not but also detects and highlights the signs that allow DR to be identified. For development, we used a database of about half a million images classified in a real clinical environment by family doctors (FDs), ophthalmologists, or both. The AI detected more than 95% of cases worse than mild DR and had 70% fewer misclassifications of healthy cases than FDs. In addition, the AI detected DR signs in 1258 patients before they were detected by FDs, representing 7.9% of the total number of DR patients detected by the FDs. These results suggest that the AI is at least comparable to the evaluation of FDs. We suggest that such signaling tools may be most useful as an aid to diagnosis rather than as a stand-alone AI.
Keywords: artificial intelligence; deep learning; diabetic retinopathy; tele-ophthalmology
Year: 2022 PMID: 36078875 PMCID: PMC9456446 DOI: 10.3390/jcm11174945
Source DB: PubMed Journal: J Clin Med ISSN: 2077-0383 Impact factor: 4.964
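The abstract describes a ResNet-50 backbone augmented with Squeeze-and-Excitation blocks; the excerpt gives no implementation details, but the channel-recalibration step of an SE block can be sketched in NumPy as follows (the layer sizes, random weights, and reduction ratio r = 16 are illustrative assumptions, not the authors' configuration):

```python
import numpy as np

def squeeze_excitation(x, w1, w2):
    """Recalibrate the channels of a feature map x of shape (C, H, W).

    Squeeze: global average pooling over the spatial dimensions.
    Excitation: two fully connected layers (ReLU, then sigmoid)
    producing one scaling weight in (0, 1) per channel.
    """
    z = x.mean(axis=(1, 2))                # squeeze: (C,)
    s = np.maximum(w1 @ z, 0.0)            # FC + ReLU: (C // r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))    # FC + sigmoid: (C,)
    return x * s[:, None, None]            # channel-wise rescaling

# Toy example: a 32-channel 8x8 feature map with random FC weights
# (hypothetical numbers, reduction ratio r = 16).
rng = np.random.default_rng(0)
x = rng.standard_normal((32, 8, 8))
w1 = rng.standard_normal((32 // 16, 32))   # compress to C / r
w2 = rng.standard_normal((32, 32 // 16))   # expand back to C
y = squeeze_excitation(x, w1, w2)
```

Because the sigmoid gates lie strictly between 0 and 1, the block can only attenuate channels, never amplify them; in the full network this lets training emphasize informative feature channels at negligible extra cost.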
Figure 1. Architecture of the neural network. Only one block per sequence is shown.
Figure 2. Distribution of visits for diabetic retinopathy (DR) screening. Right: development set. Left: test set, divided into six segments. A: visits evaluated only by family doctors and labeled as “no DR”. B: visits evaluated by at least one family doctor and labeled as “DR”. C: visits evaluated by at least one family doctor and labeled as doubtful or non-evaluated. D: visits evaluated by at least one ophthalmologist and labeled as “no DR”. E: visits evaluated by at least one ophthalmologist and labeled as “mild DR” (MiDR). F: visits evaluated by at least one ophthalmologist and labeled as “moderate DR” or worse.
Figure 3. Multiscale activation map extraction. Three activation maps are extracted at different scales and combined to form an accurate result.
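The multiscale extraction in Figure 3 combines activation maps taken at different scales into one result. The exact fusion rule is not stated in this excerpt; a minimal sketch, assuming nearest-neighbour upsampling to a common resolution followed by averaging, could look like this:

```python
import numpy as np

def upsample_nearest(m, factor):
    """Nearest-neighbour upsampling of a 2-D map by an integer factor."""
    return np.kron(m, np.ones((factor, factor)))

def combine_multiscale(maps, out_size):
    """Resize activation maps from different network stages to a common
    resolution, average them, and normalise the result to [0, 1]."""
    resized = [upsample_nearest(m, out_size // m.shape[0]) for m in maps]
    fused = np.mean(resized, axis=0)
    return (fused - fused.min()) / (fused.max() - fused.min() + 1e-8)

# Toy maps at 4x4, 8x8, and 16x16 (stand-ins for three scales).
rng = np.random.default_rng(1)
maps = [rng.random((4, 4)), rng.random((8, 8)), rng.random((16, 16))]
heatmap = combine_multiscale(maps, 16)
```

The fused heatmap can then be thresholded to select areas of interest, as in the third column of Figure 5.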
Evaluation by family doctors (FD), ophthalmologists (OPH), and artificial intelligence (AI) over the segments of Figure 2. DR: diabetic retinopathy. MiDR: mild diabetic retinopathy.
| Segments | FD: No DR | FD: DR | FD: Doubtful or Non-Evaluated | OPH: No DR | OPH: MiDR | OPH: >MiDR | AI: No DR | AI: DR |
|---|---|---|---|---|---|---|---|---|
| (A) No DR label by FD | 149,987 | 0 | 0 | - | - | - | 134,880 | 15,107 |
| (B) DR label by FD | 0 | 43,681 | 0 | 27,319 | 10,480 | 5,882 | 22,834 | 20,847 |
| (C) Doubtful or non-evaluated by FD | 0 | 0 | 23,138 | 20,876 | 4,253 | 3,009 | 17,198 | 10,940 |
| (D) No DR label by OPH | 0 | 27,319 | 20,876 | 48,195 | 0 | 0 | 35,104 | 13,091 |
| (E) MiDR label by OPH | 0 | 10,480 | 4,253 | 0 | 14,733 | 0 | 4,485 | 10,248 |
| (F) >MiDR label by OPH | 0 | 5,882 | 3,009 | 0 | 0 | 8,891 | 443 | 8,448 |
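The abstract's headline figure ("more than 95% of cases worse than mild DR") can be reproduced directly from segment F of the table, and the table also yields the AI's agreement rate on visits the family doctors labeled as healthy (segment A):

```python
# Illustrative checks computed directly from the table above.

# Segment F (>MiDR by ophthalmologists): the AI flagged 8,448 of
# 8,891 visits as DR.
sens_worse_than_mild = 8448 / (8448 + 443)

# Segment A (no-DR label by family doctors): the AI agreed on
# 134,880 of 149,987 visits.
agreement_no_dr = 134880 / (134880 + 15107)

print(f"AI detection of >MiDR: {sens_worse_than_mild:.1%}")     # > 95%
print(f"AI agreement on FD no-DR visits: {agreement_no_dr:.1%}")
```

These are simple per-segment ratios, not the paper's full evaluation, but they confirm the >95% detection rate claimed for cases worse than mild DR.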
Figure 4. Number of diabetic retinopathy cases detected by the artificial intelligence vs. the gold standard (ophthalmologists) for segments B and C.
Figure 5. First column: input retinography. Second column: multiscale activation maps. Third column: automatic selection of areas of interest based on activation maps.
Figure 6. Fundus image with mild DR. A small sign of DR is visible on the left side of the eye, highlighted with a blue rectangle.