| Literature DB >> 34997031 |
Xinyue Li1,2,3, Chenjie Xia4, Xin Li5, Shuangqing Wei5, Sujun Zhou1,2, Xuhui Yu1,2, Jiayue Gao1,2, Yanpeng Cao6, Hong Zhang7,8.
Abstract
Diabetes can cause microvessel impairment. However, these conjunctival pathological changes are not easily recognized, limiting their potential as independent diagnostic indicators. Therefore, we designed a deep learning model to explore the relationship between conjunctival features and diabetes, and to advance automated identification of diabetes through conjunctival images. Images were collected from patients with type 2 diabetes and healthy volunteers. A hierarchical multi-tasking network model (HMT-Net) was developed using conjunctival images, and the model was systematically evaluated and compared with other algorithms. The sensitivity, specificity, and accuracy of the HMT-Net model to identify diabetes were 78.70%, 69.08%, and 75.15%, respectively. The performance of the HMT-Net model was significantly better than that of ophthalmologists. The model allowed sensitive and rapid discrimination by assessment of conjunctival images and can be potentially useful for identifying diabetes.Entities:
Year: 2022 PMID: 34997031 PMCID: PMC8742044 DOI: 10.1038/s41598-021-04006-z
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1Overview of the study.
Figure 2Architectures of ResNet50 and HMT-Net. (a) Network architecture for F-Net (Q1) ~ F-Net (Q4). (b) Integrating F-Nets (with GAP and FC layers removed) into HMT-Net. GAP = Global Average Pooling. FC = Fully Connected.
Division of the dataset in fivefold cross-validation.
| | D | H |
|---|---|---|
| Set 1 | 84 | 44 |
| Set 2 | 71 | 43 |
| Set 3 | 83 | 44 |
| Set 4 | 94 | 43 |
| Set 5 | 73 | 32 |
(D = diabetes, H = healthy).
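The Set 1–Set 5 division above can be reproduced in spirit with a stratified split that keeps the two classes separate in every fold. This is a minimal sketch, not the authors' code: the totals (405 diabetes and 206 healthy images) come from summing the table, while the `fivefold_split` helper and its even round-robin allocation are assumptions — the paper's actual folds are of unequal size.

```python
import random

def fivefold_split(diabetes_ids, healthy_ids, seed=0):
    """Stratified fivefold split: shuffle each class independently,
    then deal samples round-robin so every fold keeps roughly the
    same D/H ratio (mirroring the Set 1-Set 5 table above)."""
    rng = random.Random(seed)
    folds = [{"D": [], "H": []} for _ in range(5)]
    for label, ids in (("D", list(diabetes_ids)), ("H", list(healthy_ids))):
        rng.shuffle(ids)
        for i, sample in enumerate(ids):
            folds[i % 5][label].append(sample)
    return folds

# 405 diabetes and 206 healthy images, as summed from the table.
folds = fivefold_split(range(405), range(206))
for k, fold in enumerate(folds, 1):
    print(f"Set {k}: D={len(fold['D'])}, H={len(fold['H'])}")
```

Each fold then serves once as the test set while the remaining four train the model, which is how the fivefold results in the next table are obtained.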
Figure 3Visualization results of the classification networks of Q1 ~ Q4. Heat maps illustrate which part of the image contributes more to the classification results.
Figure 4Receiver operating characteristic curves for HMT-Net and other models. AUC = area under the receiver operating characteristic curve.
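The AUC values compared in Figure 4 can be computed without any plotting library via the rank interpretation of AUC: the probability that a randomly chosen positive (diabetes) image receives a higher model score than a randomly chosen negative (healthy) one. The sketch below uses invented toy scores, not the study's data.

```python
def roc_auc(scores, labels):
    """AUC as a rank statistic: fraction of (positive, negative)
    pairs where the positive scores higher (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: higher score = model leans toward "diabetes".
labels = [1, 1, 1, 0, 0]
scores = [0.9, 0.8, 0.3, 0.4, 0.1]
print(roc_auc(scores, labels))  # 5 of 6 pairs ranked correctly ~= 0.833
```

An AUC of 0.82, as reported for HMT-Net below, means a diabetic conjunctival image outranks a healthy one about 82% of the time.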
Fivefold cross-validation results of HMT-Net compared with other algorithms.
| Model | SE (%) | SP (%) | AUC |
|---|---|---|---|
| HMT-Net | 78.70 | 69.08 | 0.82 |
| ResNet50 | 74.76 | 69.57 | 0.80 |
| MobileNetV2 | 72.38 | 73.22 | 0.79 |
| InceptionV3 | 78.68 | 62.80 | 0.78 |
SE = sensitivity, SP = specificity, AUC = area under the ROC curve.
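The sensitivity, specificity, and accuracy figures in these tables follow directly from confusion-matrix counts. The helper below is a generic sketch; the counts in the demo call are illustrative round numbers, not the study's actual test-set tallies.

```python
def classification_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion-matrix
    counts, with "diabetes" as the positive class."""
    se = tp / (tp + fn)                    # true positive rate
    sp = tn / (tn + fp)                    # true negative rate
    acc = (tp + tn) / (tp + fn + tn + fp)  # overall correctness
    return se, sp, acc

# Illustrative counts only (100 diabetic, 100 healthy test images).
se, sp, acc = classification_metrics(tp=79, fn=21, tn=69, fp=31)
print(f"SE={se:.2%}  SP={sp:.2%}  ACC={acc:.2%}")
```

Note that with imbalanced classes, as in this dataset, accuracy sits between SE and SP but closer to the metric of the larger class.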
Performance of artificial intelligence vs ophthalmologists in identifying diabetes based on conjunctival images.
| Model | SE (%) | SP (%) | ACC (%) | Speed (s/image) |
|---|---|---|---|---|
| HMT-Net | 78.70 | 69.08 | 75.15 | 1.24 |
| Human average | 59.02 | 42.11 | 51.59 | 7.76 |
| Ophthalmologist 1 | 70.10 | 55.26 | 63.58 | 9.21 |
| Ophthalmologist 2 | 62.89 | 38.16 | 52.00 | 6.24 |
| Ophthalmologist 3 | 64.95 | 18.42 | 44.51 | 8.25 |
| Ophthalmologist 4 | 38.14 | 56.68 | 46.24 | 7.35 |
SE = sensitivity, SP = specificity, ACC = accuracy.