Li Lu1, Enliang Zhou2, Wangshu Yu1, Bin Chen3, Peifang Ren1, Qianyi Lu4, Dian Qin5, Lixian Lu5, Qin He1, Xuyuan Tang1, Miaomiao Zhu1, Li Wang6, Wei Han7.
Abstract
Globally, myopia has reached epidemic levels. High myopia and pathological myopia (PM) are leading causes of visual impairment and blindness in China, demanding a large volume of screening work to control the rapidly growing prevalence of myopia. An automated, intelligent system is desirable to facilitate these time- and labor-consuming tasks. In this study, we designed a series of deep learning systems to detect PM and myopic macular lesions from color fundus images, according to a recent international photographic classification system (META-PM). Notably, our systems showed robust performance on both the test and external validation datasets, comparable to that of general ophthalmologists and retinal specialists. With wide adoption of this technology, effective mass screening of the myopic population will become feasible on a national scale.
Year: 2021 PMID: 34702997 PMCID: PMC8548495 DOI: 10.1038/s42003-021-02758-y
Source DB: PubMed Journal: Commun Biol ISSN: 2399-3642
Fig. 1 Workflow diagram showing the overview of developing deep learning systems to detect PM as well as myopic maculopathy.
PM pathologic myopia, NPM non-pathologic myopia, DLS deep learning system. *20 graders were randomly divided into five teams, each comprising three general ophthalmologists and one senior specialist.
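One plausible way to chain the three classifiers in the workflow above is a cascade: an ungradable check, then PM detection, then five-class maculopathy grading only for PM-positive images. The sketch below uses hypothetical model scores and thresholds (`screen_fundus_image` and the 0.5 cutoffs are illustrative, not the paper's implementation):

```python
# Sketch of a cascaded screening workflow (hypothetical thresholds and
# score inputs; the study's actual systems are CNNs trained on color
# fundus images, whose internals are not described in this record).

def screen_fundus_image(p_ungradable, p_pm, category_probs,
                        ungradable_thresh=0.5, pm_thresh=0.5):
    """Route one image through the three stages of the workflow.

    p_ungradable   -- model score that the image is ungradable
    p_pm           -- model score for pathologic myopia (PM)
    category_probs -- scores for META-PM categories 0-4
    """
    if p_ungradable >= ungradable_thresh:
        return "ungradable"
    if p_pm < pm_thresh:
        return "NPM"  # non-pathologic myopia
    # Five-class grading only runs on PM-positive images.
    category = max(range(5), key=lambda i: category_probs[i])
    return f"PM, Category {category}"

print(screen_fundus_image(0.1, 0.9, [0.05, 0.1, 0.6, 0.2, 0.05]))
# -> PM, Category 2
```

The cascade mirrors the grading flow in Fig. 1, where ungradable images are set aside before PM detection and maculopathy grading.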
Study population characteristics of the total dataset and external validation dataset.
| Number of images with labels | Number of participants | Mean age (years) | Sex (% female) | Spherical equivalent (diopters) | |
|---|---|---|---|---|---|
| Total dataset | 17,330 | 13,869 | 49.5 | 66.1 | −2.6 ± 4.79a |
| Ungradable images | 902 | 881 | 48.8 | 66.6 | NA |
| Non-PM | 14,623 | 11,698 | 49.2 | 65.9 | −1.294 ± 2.39 |
| Pathologic myopia | 1805 | 1290 | 52.7 | 67.5 | −14.469 ± 4.84 |
| Category 0 | 693 | 645 | 50.6 | 61.2 | −7.23 ± 0.22 |
| Category 1 | 1581 | 1089 | 48.9 | 66.3 | −11.38 ± 2.75 |
| Category 2 | 480 | 338 | 49.7 | 67.5 | −14.06 ± 4.26 |
| Category 3 | 451 | 334 | 55.9 | 68.9 | −16.26 ± 5.24 |
| Category 4 | 331 | 188 | 61.3 | 67 | −16.87 ± 5.78 |
| External validation dataset | 1000 | 738 | 51.5 | 63.4 | −3.07 ± 5.80a |
| Ungradable images | 63 | 59 | 59.0 | 63.9 | NA |
| Non-PM | 800 | 602 | 50.5 | 64.2 | −1.5 ± 3.42 |
| Pathologic myopia | 137 | 77 | 53.6 | 55.8 | −15.35 ± 5.98 |
| Category 0 | 35 | 31 | 52.1 | 64.1 | −6.73 ± 0.24 |
| Category 1 | 121 | 78 | 49.8 | 61.6 | −12.35 ± 4.98 |
| Category 2 | 32 | 17 | 50.5 | 62.7 | −15.75 ± 5.71 |
| Category 3 | 33 | 18 | 56.2 | 65.1 | −17.03 ± 5.96 |
| Category 4 | 23 | 13 | 62.4 | 66.2 | −17.26 ± 6.21 |
Abbreviations: PM pathologic myopia.
aDoes not include refractive error data for the ungradable image group.
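The spherical-equivalent (SE) stratification in the table can be related to conventional severity cutoffs used in myopia research (myopia SE ≤ −0.50 D, high myopia SE ≤ −6.00 D). These cutoffs are widely used conventions, not definitions stated in this record:

```python
# Conventional spherical-equivalent (SE) severity cutoffs (assumed,
# not taken from this study): myopia SE <= -0.50 D, high myopia
# SE <= -6.00 D.

def myopia_group(se_diopters):
    """Classify a spherical equivalent (in diopters) into coarse groups."""
    if se_diopters <= -6.0:
        return "high myopia"
    if se_diopters <= -0.5:
        return "myopia"
    return "non-myopic"

# Mean SE values from the table above:
print(myopia_group(-14.469))  # PM group     -> high myopia
print(myopia_group(-1.294))   # non-PM group -> myopia
```

Note how the mean SE of the PM group (−14.469 D) sits far beyond the high-myopia cutoff, consistent with PM being a complication of high myopia.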
Classification results for binary task in test dataset.
| AUC (95% CI) | Accuracy (95% CI) | Specificity (95% CI) | Sensitivity (95% CI) | |
|---|---|---|---|---|
| DLS | 0.993 (0.989 to 0.997) | 97.7% (97.0 to 98.4) | 97.2% (96.2 to 98.0) | 97.7% (97.0 to 98.5) |
| General ophthalmologista | – | 97.8% (97.1 to 98.5) | 96.7% (95.8 to 97.6) | 98.0% (97.3 to 98.6) |
| Retinal specialista | – | 99.1% (98.6 to 99.6) | 98.9% (98.4 to 99.4) | 99.1% (98.7 to 99.6) |
Abbreviations: DLS deep learning system, AUC area under the receiver operating characteristic curve.
aResults from the external general ophthalmologist and retinal specialist.
Fig. 2 Receiver operating characteristic (ROC) curves of the deep learning systems derived from the test datasets.
a The performance for the binary task. b The performance for the three-class task. c The performance for the five-class task. NPM non-pathologic myopia, PM pathologic myopia, AUC area under the receiver operating characteristic curve, C Category.
Classification results for multiclass tasks in test dataset.
| Macro-AUC | Accuracy (95% CI) | Quadratic-weighted kappa (95% CI) | |
|---|---|---|---|
| Task of ungradable/NPM/PM | |||
| DLS | 0.979 | 96.3% (95.1 to 97.5) | 0.787 (0.737 to 0.837) |
| General ophthalmologist | – | 98.4% (97.6 to 99.2) | 0.962 (0.940 to 0.979) |
| Retinal specialist | – | 99.2% (98.6 to 99.8) | 0.981 (0.969 to 0.994) |
| Task of 5 myopic maculopathy categories | |||
| DLS | 0.978 | 97.6% (96.8 to 98.3) | 0.990 (0.985 to 0.994) |
| Category 0 | – | 98.8% | – |
| Category 1 | – | 99.3% | – |
| Category 2 | – | 93.7% | – |
| Category 3 | – | 95.5% | – |
| Category 4 | – | 93.9% | – |
| General ophthalmologist | – | 95.4% (94.3 to 96.4) | 0.966 (0.957 to 0.974) |
| Category 0 | – | 97.7% | – |
| Category 1 | – | 98.1% | – |
| Category 2 | – | 91.6% | – |
| Category 3 | – | 88.8% | – |
| Category 4 | – | 90.9% | – |
| Retinal specialist | – | 98.9% (98.3 to 99.4) | 0.991 (0.986 to 0.995) |
| Category 0 | – | 100% | – |
| Category 1 | – | 99.3% | – |
| Category 2 | – | 97.9% | – |
| Category 3 | – | 97.7% | – |
| Category 4 | – | 96.9% | – |
Abbreviations: DLS deep learning system, PM pathologic myopia, NPM non-pathologic myopia.
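The quadratic-weighted kappa reported above is the standard agreement metric for ordinal grading tasks such as the five META-PM categories. A minimal implementation, with illustrative labels (not the study's data):

```python
import numpy as np

# Quadratic-weighted kappa: agreement between ordinal gradings,
# penalizing disagreements by the squared distance between classes.

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    # Observed confusion matrix.
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    # Quadratic disagreement weights, normalized to [0, 1].
    w = np.array([[(i - j) ** 2 for j in range(n_classes)]
                  for i in range(n_classes)], dtype=float)
    w /= (n_classes - 1) ** 2
    # Expected matrix under chance agreement (outer product of marginals).
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1.0 - (w * O).sum() / (w * E).sum()

y_true = [0, 1, 2, 3, 4, 2, 1, 0]
y_pred = [0, 1, 2, 3, 4, 2, 1, 1]  # one off-by-one disagreement
print(round(quadratic_weighted_kappa(y_true, y_pred, 5), 3))
# -> 0.961
```

Because the weights grow quadratically with class distance, confusing Category 0 with Category 1 costs far less than confusing Category 0 with Category 4, which suits ordinal severity scales.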
Fig. 3 Typical misclassified cases of the DLSs.
Typical false-negative images in the binary task: a PM with retinal detachment. b PM with retinal vein occlusion. Typical false-positive images in the binary task: c Tessellated fundus. d Retinal vein occlusion. e Exudative retinopathy. f Proliferative retinopathy. Major error cases of the three-class task: g Images with relatively poor clarity. Major error cases of the five-class task: h A patchy chorioretinal atrophy image classified as macular atrophy. i A macular atrophy image classified as patchy chorioretinal atrophy.
Fig. 4 Visualization of the DLS for the five-class task.
a Original images of different myopic maculopathy categories (Category 1–Category 4). b Heatmaps generated from deep features overlaid on the original images. Typical myopic maculopathy lesions are observed in the hot regions.
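The overlay in Fig. 4 is typically produced by upsampling a low-resolution class-activation map to image size, normalizing it, and alpha-blending it onto the fundus photo. The record does not state the exact visualization method, so the sketch below (nearest-neighbor upsampling, a simple red "hot" channel) is one plausible, simplified version:

```python
import numpy as np

# Sketch of a heatmap overlay (assumed method; Grad-CAM-style maps are
# a common choice, but the study's exact technique is not given here).

def overlay_heatmap(image, cam, alpha=0.4):
    """image: HxWx3 float array in [0, 1]; cam: hxw activation map."""
    H, W = image.shape[:2]
    # Nearest-neighbor upsampling of the activation map to HxW.
    rows = np.arange(H) * cam.shape[0] // H
    cols = np.arange(W) * cam.shape[1] // W
    cam_up = cam[np.ix_(rows, cols)]
    # Normalize activations to [0, 1].
    cam_up = (cam_up - cam_up.min()) / (cam_up.max() - cam_up.min() + 1e-8)
    # Map activation onto the red channel as a simple "hot" colormap.
    heat = np.zeros_like(image)
    heat[..., 0] = cam_up
    return (1 - alpha) * image + alpha * heat

img = np.full((8, 8, 3), 0.5)             # gray placeholder "fundus"
cam = np.arange(16, dtype=float).reshape(4, 4)
out = overlay_heatmap(img, cam)
print(out.shape)  # (8, 8, 3)
```

In practice a bilinear upsample and a perceptual colormap (e.g. jet or viridis) are used, but the blending principle is the same.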