| Literature DB >> 35941613 |
Daoliang Xu1,2, Shangshang Ding1,2, Tianli Zheng1,2, Xingshuai Zhu1,2, Zhiheng Gu1,2, Bin Ye1,2, Weiwei Fu3,4.
Abstract
BACKGROUND: Refractive error detection is a significant factor in preventing the development of myopia. To improve the efficiency and accuracy of refractive error detection, a refractive error detection network (REDNet) is proposed that combines the advantages of a convolutional neural network (CNN) and a recurrent neural network (RNN). It not only extracts the features of each image but also fully exploits the sequential relationship between images. In this article, we develop a system to predict the spherical power, cylindrical power, and spherical equivalent from multiple eccentric photorefraction images.
APPROACH: First, pupil-area images are extracted from multiple eccentric photorefraction images; then, the features of each pupil image are extracted using the REDNet convolutional layers. Finally, the features are fused by the recurrent layers in REDNet to predict the spherical power, cylindrical power, and spherical equivalent.
Keywords: Convolutional neural network; Deep learning; Image processing; Myopia; Photorefraction; Refractive error
Year: 2022 PMID: 35941613 PMCID: PMC9360706 DOI: 10.1186/s12938-022-01025-3
Source DB: PubMed Journal: Biomed Eng Online ISSN: 1475-925X Impact factor: 3.903
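The abstract describes a two-stage pipeline: per-image CNN features followed by recurrent fusion of the image sequence into three regression outputs. A minimal numpy sketch of that data flow is below; the feature extractor is stubbed as a flatten-plus-projection and the recurrent step is a plain tanh update, so all names, shapes, and weights are illustrative assumptions, not the paper's actual REDNet.

```python
import numpy as np

rng = np.random.default_rng(0)

def cnn_features(image, W):
    """Stand-in for the convolutional feature extractor (flatten + projection)."""
    return np.tanh(image.reshape(-1) @ W)

def rnn_step(h, x, Wh, Wx):
    """Minimal tanh recurrent update used to fuse the image sequence."""
    return np.tanh(h @ Wh + x @ Wx)

def rednet_sketch(images, feat_dim=32, hidden=16):
    _, hgt, wid = images.shape
    W  = rng.standard_normal((hgt * wid, feat_dim)) * 0.01
    Wh = rng.standard_normal((hidden, hidden)) * 0.1
    Wx = rng.standard_normal((feat_dim, hidden)) * 0.1
    Wo = rng.standard_normal((hidden, 3)) * 0.1   # sphere, cylinder, SE heads
    h = np.zeros(hidden)
    for img in images:                            # sequential feature fusion
        h = rnn_step(h, cnn_features(img, W), Wh, Wx)
    return h @ Wo

# Seven pupil images, one per meridian direction (cf. Fig. 6)
preds = rednet_sketch(rng.standard_normal((7, 24, 24)))
print(preds.shape)  # (3,)
```

The key design point the sketch preserves is that the recurrent state `h` accumulates evidence across the meridian sequence before any prediction is made.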
Detection results for different network structures
| Network | MAE (D) | Accuracy (%) | No. parameters |
|---|---|---|---|
| VGG16 | 0.7916 | 45.41 | 19.19 M |
| ResNet18 | 0.8617 | 42.34 | 11.18 M |
| Xception | 0.3204 | 82.35 | 7.24 M |
| Mini-Xception | 0.3440 | 79.26 | 0.79 M |
| Ours | | | |
Bold values indicate the best results
Experimental results for different feature fusion methods
| Methods | Spherical component MAE (D) | Spherical component Accuracy (%) | Cylindrical component MAE (D) | Cylindrical component Accuracy (%) | Spherical equivalent MAE (D) | Spherical equivalent Accuracy (%) |
|---|---|---|---|---|---|---|
| Addition | 0.2818 | 83.06 | 0.1210 | 96.39 | 0.2593 | 85.98 |
| Concatenation | 0.3663 | 74.02 | 0.1590 | 96.59 | 0.3196 | 78.19 |
| AFF | 0.2443 | 87.05 | 0.0761 | 96.40 | 0.2109 | 89.00 |
| LSTM | | | | | | |
Bold values indicate the best results
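The fusion variants compared above can be sketched on a pair of feature vectors. Addition and concatenation are standard; the "AFF" row is approximated here as a learned sigmoid gate that convexly mixes the two inputs, which is an assumption about its behavior rather than the paper's exact attentional feature fusion module.

```python
import numpy as np

def fuse_add(a, b):
    return a + b                           # element-wise addition

def fuse_concat(a, b):
    return np.concatenate([a, b])          # doubles the feature dimension

def fuse_gated(a, b, w):
    """AFF-like sketch: scalar sigmoid gate mixing the two feature vectors."""
    g = 1.0 / (1.0 + np.exp(-(a + b) @ w))   # gate in (0, 1)
    return g * a + (1.0 - g) * b

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.5, 0.5, 0.5])
w = np.array([0.1, 0.1, 0.1])

print(fuse_add(a, b))            # [1.5 2.5 3.5]
print(fuse_concat(a, b).shape)   # (6,)
print(fuse_gated(a, b, w))
```

Note that concatenation changes the feature dimension (forcing larger downstream layers), while addition and gated fusion keep it fixed.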
Fig. 6 Eccentric photorefraction images of the same pupil with meridian directions of (a) 0°, (b) 60°, (c) 120°, (d) 180°, (e) 240°, (f) 300°, and (g) 0° after contrast stretching
Experimental results for different activation functions
| Activation function | MAE (D) | Accuracy (%) | Time/step (ms) |
|---|---|---|---|
| Sigmoid | 0.3636 | 75.85 | 20 |
| ELU | 0.3072 | 83.15 | 21 |
| Swish | 0.2874 | 84.78 | 22 |
| Leaky ReLU | 0.2843 | 85.00 | 20 |
| ReLU | 0.2792 | 86.46 | 20 |
| ReLU6 | | | |
Bold values indicate the best results
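The activation functions benchmarked above have standard closed forms, reproduced here as plain numpy one-liners for reference (default slopes/scales are the common conventions, not values taken from the paper):

```python
import numpy as np

sigmoid    = lambda x: 1.0 / (1.0 + np.exp(-x))
elu        = lambda x, a=1.0: np.where(x > 0, x, a * (np.exp(x) - 1))
swish      = lambda x: x * sigmoid(x)
leaky_relu = lambda x, a=0.01: np.where(x > 0, x, a * x)
relu       = lambda x: np.maximum(0.0, x)
relu6      = lambda x: np.minimum(np.maximum(0.0, x), 6.0)

x = np.array([-2.0, 0.0, 3.0, 8.0])
print(relu(x))   # [0. 0. 3. 8.]
print(relu6(x))  # [0. 0. 3. 6.]
```

ReLU6 differs from ReLU only in clipping positive activations at 6, which bounds the output range at negligible extra cost, consistent with the near-identical per-step times in the table.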
Fig. 1ROC curve and AUC value
Fig. 2 Visualization heat maps of different refractive errors
Experimental results for the other two networks
| Networks | RNN | Gate | MAE (D) | Accuracy (%) |
|---|---|---|---|---|
| REDNet-N | | | 0.2890 | 84.32 |
| REDNet-SimpleRNN | √ | | 0.3428 | 80.04 |
| REDNet | √ | √ | | |
Bold values indicate the best results
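The "RNN" and "Gate" columns in the ablation above distinguish a plain recurrent update from a gated (LSTM-style) one. A minimal single-step version of each cell is sketched below; the weights and dimensions are illustrative, and this is the textbook LSTM formulation rather than the paper's exact implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def simple_rnn_step(x, h, Wx, Wh, b):
    """Plain recurrent update: no gates."""
    return np.tanh(x @ Wx + h @ Wh + b)

def lstm_step(x, h, c, Wx, Wh, b):
    """Gated update: input (i), forget (f), and output (o) gates."""
    z = x @ Wx + h @ Wh + b            # all four pre-activations at once
    i, f, g, o = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c = f * c + i * np.tanh(g)         # gated cell-state update
    return o * np.tanh(c), c

d, hdim = 4, 3
rng = np.random.default_rng(1)
x, h, c = rng.standard_normal(d), np.zeros(hdim), np.zeros(hdim)
h1 = simple_rnn_step(x, h, rng.standard_normal((d, hdim)),
                     rng.standard_normal((hdim, hdim)), np.zeros(hdim))
h2, c2 = lstm_step(x, h, c, rng.standard_normal((d, 4 * hdim)),
                   rng.standard_normal((hdim, 4 * hdim)), np.zeros(4 * hdim))
print(h1.shape, h2.shape)  # (3,) (3,)
```

The gates let the LSTM decide per step how much of the accumulated sequence state to keep, which the plain tanh cell cannot do.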
Fig. 3 Overall structure of the proposed REDNet
Fig. 4 Method of data collection
Fig. 5 Example of a captured face image
Data distribution before and after data cleaning
| Characteristic | Before cleaning | After cleaning |
|---|---|---|
| Number of image groups | 6146 | 6074 |
| Severe myopia (SE < −6 D) | 1708 | 1699 |
| Moderate myopia (−6 D ≤ SE < −3 D) | 2395 | 2378 |
| Mild myopia (−3 D ≤ SE < 0 D) | 1919 | 1879 |
| Emmetropia and hyperopia (SE ≥ 0 D) | 124 | 118 |
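The severity bands in the table above are defined on the spherical equivalent (SE), which is conventionally sphere + cylinder / 2. A small helper reproducing the table's thresholds:

```python
def spherical_equivalent(sphere, cylinder):
    """SE = S + C/2, the standard optometric convention (in diopters)."""
    return sphere + cylinder / 2.0

def severity(se):
    """Bands matching the data-cleaning table's thresholds."""
    if se < -6:
        return "severe myopia"
    if se < -3:
        return "moderate myopia"
    if se < 0:
        return "mild myopia"
    return "emmetropia/hyperopia"

# e.g. sphere -5.00 D with cylinder -2.50 D gives SE = -6.25 D
print(severity(spherical_equivalent(-5.0, -2.5)))  # severe myopia
```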
Fig. 7 Structure of the constructed CNN