| Literature DB >> 30275989 |
Gen-Min Lin1,2,3, Mei-Juan Chen1, Chia-Hung Yeh4,5, Yu-Yang Lin4, Heng-Yu Kuo1, Min-Hui Lin4, Ming-Chin Chen1, Shinfeng D Lin6, Ying Gao7, Anran Ran8, Carol Y Cheung8.
Abstract
Entropy images, which represent the complexity of the original fundus photographs, may strengthen the contrast between diabetic retinopathy (DR) lesions and unaffected areas. The aim of this study was to compare the detection performance for severe DR between original fundus photographs and entropy images using deep learning. A sample of 21,123 interpretable fundus photographs obtained from a publicly available data set was expanded to 33,000 images by rotation and flipping. All photographs were transformed into entropy images using a block size of 9 and downsized to a standard resolution of 100 × 100 pixels. The stages of DR were classified into 5 grades based on the International Clinical Diabetic Retinopathy Disease Severity Scale: Grade 0 (no DR), Grade 1 (mild nonproliferative DR), Grade 2 (moderate nonproliferative DR), Grade 3 (severe nonproliferative DR), and Grade 4 (proliferative DR). Of these 33,000 photographs, 30,000 were randomly selected as the training set, and the remaining 3,000 were used as the testing set. Both the original fundus photographs and the entropy images were used as inputs to a convolutional neural network (CNN), and the results of detecting referable DR (Grades 2–4) from the two data sets were compared. The detection accuracy, sensitivity, and specificity using the original fundus photographs were 81.80%, 68.36%, and 89.87%, respectively; with the entropy images, the figures significantly increased to 86.10%, 73.24%, and 93.81%, respectively (all p values < 0.001). The entropy image quantifies the amount of information in the fundus photograph and efficiently accelerates the generation of feature maps in the CNN. These results show that entropy transformation of fundus photographs can increase the detection accuracy, sensitivity, and specificity of referable DR in a deep learning-based system.
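The entropy transformation described above replaces each pixel with the Shannon entropy of its surrounding block, so that information-dense regions (lesions, vessel boundaries) stand out against smooth background. A minimal sketch of this idea, assuming 8-bit grayscale input, a 9 × 9 block as in the paper, and reflective border padding (the paper's exact border handling is not specified):

```python
import numpy as np

def local_entropy(img, block=9, levels=256):
    """Shannon entropy (bits) of each pixel's block x block neighborhood.

    img: 2-D uint8 grayscale array. Borders are padded by reflection
    so the output has the same shape as the input.
    """
    pad = block // 2
    padded = np.pad(img, pad, mode="reflect")
    h, w = img.shape
    out = np.empty((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + block, j:j + block]
            counts = np.bincount(patch.ravel(), minlength=levels)
            p = counts[counts > 0] / patch.size  # gray-level probabilities
            out[i, j] = -np.sum(p * np.log2(p))  # Shannon entropy
    return out

# A uniform region carries no information; a noisy region carries a lot.
flat = np.full((20, 20), 128, dtype=np.uint8)
rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, size=(20, 20), dtype=np.uint8)
print(local_entropy(flat).max() == 0)   # True: uniform block, zero entropy
print(local_entropy(noisy).mean() > 4)  # True: random block, high entropy
```

In practice a rank-filter implementation (e.g. the entropy filter in an image-processing library) is far faster than this nested loop; the sketch only illustrates the quantity being computed per block.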
Year: 2018 PMID: 30275989 PMCID: PMC6151683 DOI: 10.1155/2018/2159702
Source DB: PubMed Journal: J Ophthalmol ISSN: 2090-004X Impact factor: 1.909
Figure 1. The diagram of all layers in the CNN.
Figure 2. The distribution of accuracy vs. various block sizes (n) for the detection of referable DR using entropy images.
Figure 3. Original fundus photographs and entropy images of DR at each grade (0–4).
The detection performance of the original photographs versus the entropy images.
| | Original photographs (%) | Entropy images (%) | p value |
|---|---|---|---|
| Accuracy | 81.80 | 86.10 | <0.001 |
| Sensitivity | 68.36 | 73.24 | <0.001 |
| Specificity | 89.87 | 93.81 | <0.001 |
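The three metrics in the table follow directly from the binary confusion matrix for referable DR (Grades 2–4) against non-referable DR. A short sketch with hypothetical counts (chosen for illustration only; the paper's per-class counts are not reported here):

```python
def metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts.

    tp/fn: referable-DR images detected / missed;
    tn/fp: non-referable images correctly passed / falsely flagged.
    """
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # recall on referable DR
    specificity = tn / (tn + fp)   # recall on non-referable images
    return accuracy, sensitivity, specificity

# Hypothetical counts for a small test set (not the study's data).
acc, sen, spe = metrics(tp=80, fp=10, tn=90, fn=20)
print(acc, sen, spe)  # 0.85 0.8 0.9
```

Note that sensitivity and specificity move in opposite directions as the decision threshold shifts, which is why the paper also reports AUC (Figure 4) as a threshold-free summary.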
Figure 4. The AUC for the discrimination of automated interpretation for referable DR in (a) original photographs and (b) entropy images.