Guanghua Zhang, Bin Sun, Zhaoxia Zhang, Jing Pan, Weihua Yang, Yunfang Liu.
Abstract
Diabetic retinopathy (DR) is one of the most threatening complications of diabetes and leads to permanent blindness without timely treatment. However, DR screening is not only time-consuming and dependent on experienced ophthalmologists but also prone to misdiagnosis. In recent years, deep learning techniques based on convolutional neural networks have attracted increasing research attention in medical image analysis, especially for DR diagnosis; existing deep-learning-based DR detection models, however, require expensive dataset labeling. In this study, a novel domain adaptation method, multi-model domain adaptation (MMDA), is developed for unsupervised DR classification on unlabeled retinal images. It exploits only the discriminative information in multiple pre-trained source models, without access to any source data. Specifically, we integrate a weight mechanism into multi-model-based domain adaptation by measuring the importance of each source domain in a novel way, and attach a weighted pseudo-labeling strategy to the source feature extractors for training the target DR classification model. Extensive experiments transferring from four source datasets (DDR, IDRiD, Messidor, and Messidor-2) to the target domain APTOS 2019 show that MMDA achieves performance competitive with present state-of-the-art methods for DR classification. As a novel DR detection approach, this article presents a new domain adaptation solution for medical image analysis when the source data is unavailable.
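The core inference step the abstract describes — combining the predictions of several source models with learned importance weights — can be sketched as follows. This is a minimal stand-in, not the paper's exact formulation: `source_probs` and `mu` are illustrative names, and the simple normalized weighting substitutes for the paper's learned per-domain importance.

```python
def weighted_ensemble(source_probs, mu):
    """Combine per-source class probabilities with importance weights.

    source_probs: one probability vector per source model (same length each).
    mu: one non-negative importance weight per source model.
    Returns the weighted-average distribution and its argmax pseudo-label.
    """
    total = sum(mu)
    k = len(source_probs[0])
    combined = [
        sum(m * p[c] for m, p in zip(mu, source_probs)) / total
        for c in range(k)
    ]
    label = max(range(k), key=lambda c: combined[c])
    return combined, label

# Two hypothetical source models voting over three DR grades;
# the first model is trusted four times as much as the second.
probs = [[0.7, 0.2, 0.1], [0.2, 0.6, 0.2]]
dist, y = weighted_ensemble(probs, mu=[0.8, 0.2])
# dist = [0.60, 0.28, 0.12], y = 0
```

The key property is that a disagreeing but low-weight source model cannot overturn the consensus, which is the intuition behind weighting source domains by importance.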
Keywords: convolutional neural network; deep learning; diabetic retinopathy classification; domain adaptation; multi-model
Year: 2022 PMID: 35845987 PMCID: PMC9284280 DOI: 10.3389/fphys.2022.918929
Source DB: PubMed Journal: Front Physiol ISSN: 1664-042X Impact factor: 4.755
FIGURE 1. The workflow of our method. We train the target prediction model using only multiple pre-trained source models and unlabeled target retinal images.
Label distributions of DDR, IDRiD, Messidor, Messidor-2, and APTOS 2019 datasets.
| Dataset | Type | No | Mild | Moderate | Severe | Proliferative |
|---|---|---|---|---|---|---|
| DDR | Source | 6,266 | 630 | 4,477 | 236 | 913 |
| IDRiD | Source | 168 | 25 | 168 | 93 | 62 |
| Messidor | Source | 546 | 153 | 247 | 254 | — |
| Messidor-2 | Source | 1,017 | 270 | 347 | 75 | 35 |
| APTOS 2019 | Target | 1,805 | 370 | 999 | 193 | 295 |

The No and Mild grades are non-referable; the Moderate, Severe, and Proliferative grades are referable.
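The table's two-level header collapses the five clinical grades into a binary referable/non-referable screening split. A small helper makes that grouping explicit (the No + Mild = non-referable convention is the one implied by the table header; function and variable names are illustrative):

```python
def referable_split(counts):
    """Collapse five-grade DR counts into (non-referable, referable).

    counts maps grade name -> number of images; absent grades count as 0,
    which handles Messidor's missing Proliferative column.
    """
    non_ref = counts.get("No", 0) + counts.get("Mild", 0)
    ref = sum(counts.get(g, 0) for g in ("Moderate", "Severe", "Proliferative"))
    return non_ref, ref

aptos = {"No": 1805, "Mild": 370, "Moderate": 999, "Severe": 193,
         "Proliferative": 295}
print(referable_split(aptos))  # (2175, 1487)
```

Applied to each row, this shows how imbalanced the binary task is: every dataset has more non-referable than referable images except DDR, where referable cases dominate.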
FIGURE 2. Representative retinal images after our preprocessing techniques. From top to bottom, the rows show images with no DR, moderate DR, and proliferative DR, respectively. Panels (A–C) show the original, the resized and cropped, and the enhanced retinal images.
FIGURE 3. Overview of the MMDA architecture. After preprocessing, we extract features of the target retinal images with the source models and the target model f, and compute the weight μ of each model using a single-layer neural network. The output of the target classifier is defined by the source classifiers with fixed parameters. A pseudo label for each retinal image x is obtained by feature-level clustering-based pseudo-labeling.
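The caption's feature-level clustering-based pseudo-labeling can be sketched as a two-step nearest-centroid refinement, in the spirit of centroid-based pseudo-labeling used in source-free adaptation: class centroids are first estimated in feature space from the (weighted) soft predictions, then each image is relabeled by its nearest centroid. All names are illustrative, and the paper's exact distance measure and weighting may differ.

```python
import math

def cosine_dist(a, b):
    """1 − cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb + 1e-12)

def cluster_pseudo_labels(features, soft_preds, num_classes):
    """features: one feature vector per image; soft_preds: one class-probability
    vector per image from the weighted source models. Returns hard labels."""
    dim = len(features[0])
    centroids = []
    for c in range(num_classes):
        w = [p[c] for p in soft_preds]          # soft membership of each image
        z = sum(w) + 1e-12
        centroids.append([
            sum(wi * f[d] for wi, f in zip(w, features)) / z
            for d in range(dim)
        ])
    return [
        min(range(num_classes), key=lambda c: cosine_dist(f, centroids[c]))
        for f in features
    ]

# Toy 2-D example: two well-separated clusters with noisy soft predictions.
feats = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
preds = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.1, 0.9]]
labels = cluster_pseudo_labels(feats, preds, num_classes=2)
# labels = [0, 0, 1, 1]
```

Because the centroids pool evidence across the whole target set, individual noisy predictions get corrected toward the cluster structure of the features — the motivation for clustering-based over per-image pseudo-labeling.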
Accuracy and sensitivity of MMDA for diabetic retinopathy diagnosis compared with state-of-the-art supervised learning approaches on the APTOS 2019 dataset.
| Method | Accuracy (%) | Sensitivity (%) |
|---|---|---|
| Xie et al. (2017) | 92.8 | 86.8 |
| Vives-Boix and Ruiz-Fernández (2021) | 94.5 | 90.0 |
| Narayanan et al. (2020) | 98.4 | 98.9 |
| Farag et al. (2022) | 97.0 | 97.0 |
| MMDA (Ours) | 90.6 | 98.5 |
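For reference, the accuracy and sensitivity reported in the table follow their standard definitions; on a binary referable/non-referable screening task they reduce to the usual confusion-matrix ratios. A minimal computation (the counts below are made-up illustrative numbers, not results from the paper):

```python
def accuracy_sensitivity(tp, tn, fp, fn):
    """Accuracy = correct / all; sensitivity (recall) = TP / (TP + FN)."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    sens = tp / (tp + fn)
    return acc, sens

acc, sens = accuracy_sensitivity(tp=90, tn=85, fp=15, fn=10)
# acc = 175/200 = 0.875, sens = 90/100 = 0.9
```

The MMDA row shows the pattern a screening tool favors: sensitivity (catching referable cases) near the top of the table even where overall accuracy trails the supervised methods.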
The DR classification results of MMDA with different backbones on the APTOS 2019 dataset.

| Backbone | Method | Accuracy | Sensitivity |
|---|---|---|---|
| VGG-16 | ACDA | 0.873 | 0.965 |
| VGG-16 | APDA | 0.882 | 0.973 |
| VGG-16 | MMDA | — | — |
| ResNet-50 | ACDA | 0.880 | 0.972 |
| ResNet-50 | APDA | 0.902 | 0.960 |
| ResNet-50 | MMDA | — | — |

The best results are in bold.
DR classification results using different β on the APTOS 2019 dataset.

| β | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 |
|---|---|---|---|---|---|
| Accuracy | 0.899 | 0.903 | — | 0.902 | 0.851 |
| Sensitivity | 0.984 | 0.962 | — | 0.987 | 0.986 |

The best results are in bold.
DR classification results using different γ on the APTOS 2019 dataset.

| γ | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 |
|---|---|---|---|---|---|
| Accuracy | 0.877 | 0.896 | — | 0.905 | 0.902 |
| Sensitivity | 0.976 | 0.982 | — | 0.966 | 0.963 |

The best results are in bold.
FIGURE 4. ROC curve of DR diagnosis on the APTOS 2019 dataset.
FIGURE 5. The t-SNE plot of DR classification on the APTOS 2019 dataset.