Chuying Shi, Jack Lee, Gechun Wang, Xinyan Dou, Fei Yuan, Benny Zee.
Abstract
Image quality assessment is essential for retinopathy detection on color fundus retinal images. However, most studies have focused on classifying good versus poor quality without considering the different types of poor quality. This study developed an automatic retinal image analysis (ARIA) method, incorporating a transfer-learned ResNet50 deep network with an automatic feature-generation approach, to assess image quality automatically and to distinguish eye-abnormality-associated-poor-quality from artefact-associated-poor-quality on color fundus retinal images. A total of 2434 retinal images, including 1439 of good quality and 995 of poor quality (483 eye-abnormality-associated-poor-quality and 512 artefact-associated-poor-quality), were used for training, testing, and 10-fold cross-validation. We also performed external validation, with the clinical diagnosis of eye abnormality as the reference standard, to evaluate the performance of the method. The sensitivity, specificity, and accuracy for testing good quality against poor quality were 98.0%, 99.1%, and 98.6%, and for differentiating between eye-abnormality-associated-poor-quality and artefact-associated-poor-quality were 92.2%, 93.8%, and 93.0%, respectively. In external validation, our method achieved an area under the ROC curve of 0.997 for the overall quality classification and 0.915 for the classification of the two types of poor quality. The proposed approach, ARIA, showed good performance in testing, 10-fold cross-validation, and external validation. This study provides a novel angle for image quality screening based on the different types of poor quality and their corresponding handling methods. It suggests that ARIA can be used as a screening tool in the preliminary stage of retinopathy grading by telemedicine or artificial intelligence analysis.
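The sensitivity, specificity, and accuracy figures reported above are standard confusion-matrix summaries. A minimal sketch (not the authors' code; the counts below are hypothetical, chosen only to illustrate the formulas):

```python
def binary_metrics(tp, fn, tn, fp):
    """Return (sensitivity, specificity, accuracy) as fractions
    from binary confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                 # true positive rate
    specificity = tn / (tn + fp)                 # true negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)   # overall correct fraction
    return sensitivity, specificity, accuracy

# Hypothetical counts for a 200-image test split.
se, sp, acc = binary_metrics(tp=98, fn=2, tn=99, fp=1)
print(se, sp, acc)  # 0.98 0.99 0.985
```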
Year: 2022 PMID: 35729197 PMCID: PMC9213403 DOI: 10.1038/s41598-022-13919-2
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.996
Review of studies using CNN methods to assess retinal image quality.
| Author | Year | Database | Method (architecture) | Category of image quality | Definition of classification | Performance |
|---|---|---|---|---|---|---|
| Mahapatra et al. | 2016 | A DR screening initiative | CNN and local saliency map | Gradable, ungradable | NA | Se: 98.2%, Sp: 97.8%, Acc: 97.9% |
| Yu et al. | 2017 | Kaggle | CNN (AlexNet) and saliency map | Good, poor | NA | Se: 96.63%, Sp: 93.10%, Acc: 95.42% |
| Saha et al. | 2017 | EyePACS | CNN (AlexNet) | Accept, reject, ambiguous | Yes | Acc: 100% |
| Zago et al. | 2018 | DRIMDB and ELSA-Brasil | CNN (Inception-v3) | Good, poor, outlier | NA | DRIMDB: Se: 97.10%, Sp: 100.0%, AUC: 99.98%; ELSA-Brasil: Se: 92.00%, Sp: 96.00%, AUC: 98.56% |
| Chalakkala et al. | 2019 | DRIMDB, DR1–DR2, HRF, MESSIDOR, UoA-DR, Kaggle, IDRiD | Six pre-trained CNNs (AlexNet, GoogLeNet, ResNet50, ResNet101, Inception-v3, SqueezeNet) | MSRI high quality, low quality | Some databases | Se: 98.38%, Sp: 95.19%, Acc: 97.47% |
| Shen et al. | 2020 | Shanghai Diabetic Retinopathy Screening Program | Multi-task deep learning framework (VGG16) | Gradable, ungradable | Yes | Se: 83.62%, Sp: 91.72%, AUC: 94.55% |
| Yuen et al. | 2021 | Primary dataset: CUHK Eye Center, National University Hospital; external dataset: Hong Kong Children Eye Study, Queen’s University Belfast | Two CNNs (EfficientNet-B0, MobileNetV2) | Gradable, ungradable | Yes | Se: 92.1%, Sp: 98.3%, Acc: 92.5%, AUC: 97.5% |
DL deep learning, CNN convolutional neural network, MSRI medically suitable retinal image, NA not available, Se sensitivity, Sp specificity, Acc accuracy, AUC area under the curve.
Summary of images in the primary dataset used for training, testing, and 10-fold cross validation.
| Category | Subgroup | Number n (%) | Subgroup total n (%) | Total n (%) |
|---|---|---|---|---|
| Good | | | | 1439 (59.12) |
| Poor | Eye-abnormality-associated-poor-quality | | 483 (19.84) | 995 (40.88) |
| | Cataract | 220 (9.04) | | |
| | CRVO | 118 (4.85) | | |
| | AH | 83 (3.41) | | |
| | VO | 62 (2.55) | | |
| | Artefact-associated-poor-quality | | 512 (21.04) | |
| Total | | | | 2434 (100) |
CRVO central retinal vein occlusion, AH asteroid hyalosis, VO vitreous opaque.
Figure 1 Examples of (a) good quality, (b) artefact-associated-poor-quality, and eye-abnormality-associated-poor-quality including (c) cataract, (d) central retinal vein occlusion (CRVO), (e) asteroid hyalosis, and (f) vitreous opaque.
Definition of image quality classification.
| Image quality | Visibility | Clarity |
|---|---|---|
| Good | Artefacts or eye abnormalities cover less than 1/4 of image | Level III vascular arches are visible |
| Eye abnormality associated | Eye abnormalities cover more than 1/4 of image | Level III vascular arches or larger vascular arches are invisible |
| Artefact associated | Artefacts cover more than 1/4 of image | Level III vascular arches or larger vascular arches are invisible |
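The definition table above reduces to a simple decision rule over two coverage fractions and arch visibility. A hypothetical sketch of that rule (the function and its inputs are illustrative, not the authors' implementation; measuring coverage and arch visibility is assumed to happen upstream):

```python
def quality_label(abnormality_cover, artefact_cover, arches_visible):
    """Map the definition-table criteria to a quality class.
    Coverage arguments are fractions of the image area (0.0-1.0)."""
    if arches_visible and abnormality_cover < 0.25 and artefact_cover < 0.25:
        return "good"
    if abnormality_cover > 0.25 and not arches_visible:
        return "eye-abnormality-associated-poor-quality"
    if artefact_cover > 0.25 and not arches_visible:
        return "artefact-associated-poor-quality"
    return "indeterminate"  # combinations the table does not cover

print(quality_label(0.05, 0.10, True))   # good
print(quality_label(0.40, 0.00, False))  # eye-abnormality-associated-poor-quality
```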
Summary of images in the external validation dataset, acquired with a TOPCON TRC-NW100 non-mydriatic retinal camera and a TOPCON TRC-50DC retinal camera at the hospital.
| Category | Subgroup | Number n (%) | Subgroup total n (%) | Total n (%) |
|---|---|---|---|---|
| Good | | | | 198 (55.62) |
| Poor | Eye-abnormality-associated-poor-quality | | 104 (29.21) | 158 (44.38) |
| | Cataract | 56 (15.73) | | |
| | CRVO | 32 (8.99) | | |
| | VO | 16 (4.49) | | |
| | Artefact-associated-poor-quality | | 54 (15.17) | |
| Total | | | | 356 (100) |
CRVO central retinal vein occlusion, VO vitreous opaque.
Figure 2 Flowchart of the proposed method for image quality classification. ResNet50 residual network that is 50 layers deep, Glmnet generalized linear model via penalized maximum likelihood, RF random forest, SVM support vector machine.
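The classification stage in Figure 2 trains conventional classifiers (RF, SVM) on deep features. A sketch under stated assumptions: synthetic Gaussian vectors stand in for ResNet50 embeddings of good versus poor images, and scikit-learn stands in for the authors' tooling; none of the names below come from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Two synthetic "feature" clusters standing in for good vs poor images.
X = np.vstack([rng.normal(0.0, 1.0, (100, 32)),
               rng.normal(2.0, 1.0, (100, 32))])
y = np.array([0] * 100 + [1] * 100)

rf = RandomForestClassifier(n_estimators=100, random_state=0)
svm = SVC(kernel="rbf")
# 10-fold cross-validation, mirroring the paper's evaluation protocol.
rf_acc = cross_val_score(rf, X, y, cv=10).mean()
svm_acc = cross_val_score(svm, X, y, cv=10).mean()
print(round(rf_acc, 3), round(svm_acc, 3))
```

With real ResNet50 features the clusters are far less separable than this toy example, which is why the paper compares several classifiers rather than relying on one.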
The performance of ARIA in the testing dataset and 10-fold cross-validation.
| | Testing (using RF) | | | Ten-fold cross-validation (using SVM) | | |
|---|---|---|---|---|---|---|
| Comparison | Se (%) | Sp (%) | Acc (%) | Se (%) | Sp (%) | Acc (%) |
| Good quality VS poor quality | 98.0 | 99.1 | 98.6 | 99.0 | 98.3 | 98.6 |
| Good quality VS EAAPQ | 98.6 | 99.8 | 99.5 | 99.0 | 99.7 | 99.5 |
| Good quality VS AAPQ | 98.7 | 99.8 | 99.5 | 99.8 | 99.6 | 99.6 |
| EAAPQ VS AAPQ | 93.8 | 92.2 | 93.0 | 93.8 | 92.2 | 93.0 |
RF Random forest, SVM Support vector machine, EAAPQ Eye-abnormality-associated-poor-quality, AAPQ Artefact-associated-poor-quality, Se Sensitivity, Sp Specificity, Acc Accuracy.
Figure 3 Confusion matrix of testing (1) and 10-fold cross-validation (2). (a(1), a(2)) Good quality VS Poor quality; (b(1), b(2)) Good quality VS Eye-abnormality-associated-poor-quality; (c(1), c(2)) Good quality VS Artefact-associated-poor-quality; (d(1), d(2)) Eye-abnormality-associated-poor-quality VS Artefact-associated-poor-quality.
Figure 4 (a) The performance of ARIA in differentiating good quality from poor quality in the external validation dataset; (b) the performance of ARIA in differentiating artefact-associated-poor-quality from eye-abnormality-associated-poor-quality in the external validation dataset.
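The external-validation AUCs (0.997 and 0.915) summarize ranking performance: the probability that a randomly chosen positive scores higher than a randomly chosen negative. A sketch only (toy scores, not the paper's data), computing AUC as the normalized Mann-Whitney U statistic:

```python
def auc_from_scores(pos_scores, neg_scores):
    """AUC as the fraction of (positive, negative) pairs ranked
    correctly; ties count half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical toy scores: positives mostly outrank negatives (8 of 9 pairs).
print(auc_from_scores([0.9, 0.8, 0.4], [0.3, 0.2, 0.5]))  # 0.8888888888888888
```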
Misclassifications of ARIA in differentiating artefact-associated-poor-quality from eye-abnormality-associated-poor-quality.
| Category | Type | No. | Proportion (%) |
|---|---|---|---|
| Artefact-associated-poor-quality | Blur | 3 | 75.0 |
| | Underexposure | 1 | 25.0 |
| | Total | 4 | 100.0 |
| Eye-abnormality-associated-poor-quality | CRVO | 7 | 53.8 |
| | Vitreous opaque | 6 | 46.2 |
| | Total | 13 | 100.0 |
CRVO central retinal vein occlusion.
Figure 5 Violin plot for the probability of ARIA output in differentiating artefact-associated-poor-quality from eye-abnormality-associated-poor-quality in the external validation dataset (predicted value of probability: 0: artefact-associated-poor-quality; 1: eye-abnormality-associated-poor-quality).