| Literature DB >> 36246007 |
Kanae Fukutsu1, Michiyuki Saito1, Kousuke Noda1,2, Miyuki Murata1,2, Satoru Kase1, Ryosuke Shiba3, Naoki Isogai3, Yoshikazu Asano4, Nagisa Hanawa4, Mitsuru Dohke4, Manabu Kase5, Susumu Ishida1,2.
Abstract
Purpose: To develop a novel evaluation system for retinal vessel alterations caused by hypertension using a deep learning algorithm. Design: Retrospective study. Participants: Fundus photographs (n = 10 571) of health-check participants (n = 5598).
Keywords: Arteriosclerosis; Deep learning system; Hypertensive retinopathy; Imaging; Retinal arteriolar narrowing
Abbreviations: AA, total arteriolar area; AUC, area under the receiver operating characteristic curve; AVR, arteriovenous ratio; BP, blood pressure; DBP, diastolic blood pressure; DRIVE, Digital Retinal Images for Vessel Extraction; FN, false-negative; FP, false-positive; FPa, FP arterioles; FPv, FP venules; MISCa, misclassification rate of arterioles; MISCv, misclassification rate of venules; RGB, red-green-blue; SBP, systolic blood pressure; TN, true-negative; TP, true-positive; TPa, TP arterioles; TPv, TP venules; VA, total venular area
Year: 2021 PMID: 36246007 PMCID: PMC9560649 DOI: 10.1016/j.xops.2021.100004
Source DB: PubMed Journal: Ophthalmol Sci ISSN: 2666-9145
Figure 1. Proposed deep learning method. A, Schematic view of the deep learning model. The numbers beside each layer represent the number of feature maps × width (pixels) × height (pixels). B, Detailed expositions of the DownBlock, UpBlock, multiple dilated convolutional (MDC) block, and signs (arrows and layers). C, Representative input image, manually annotated ground truth, and automatic vessel segmentation for the Digital Retinal Images for Vessel Extraction (DRIVE) dataset (top row) and for the Hokudai dataset (bottom row).
Figure 2. Verification of automatic vessel segmentation and arteriovenous classification. A, Representative input image, ground-truth image, output image from automatic vessel segmentation, and merged image (top row). In merged images, yellow pixels indicate false-negative results, pink pixels false-positive results, white pixels true-positive results, and black pixels true-negative results. Magnified views of the boxed area in each image appear in the bottom row. B, Representative input image (left) and predicted arteriole image (middle): red pixels belong to an arteriole in the ground truth and were predicted as arteriole by the deep learning program; blue pixels belong to a venule in the ground truth but were predicted as arteriole. Predicted venule image (right): blue pixels belong to a venule in the ground truth and were predicted as venule; red pixels belong to an arteriole in the ground truth but were predicted as venule.
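The paper does not publish its verification code; the color-coded merged image described for Figure 2A can be sketched as below. The function name `merge_map` and the exact RGB values for "pink" are illustrative assumptions; only the white/black/yellow/pink class assignment follows the caption.

```python
import numpy as np

def merge_map(truth, pred):
    """Color-code a pixel-wise comparison of a ground-truth vessel mask
    and a predicted vessel mask (boolean arrays of equal shape), using
    the scheme described for Figure 2A:
    white = true positive, black = true negative,
    yellow = false negative, pink = false positive."""
    h, w = truth.shape
    rgb = np.zeros((h, w, 3), dtype=np.uint8)   # all pixels start black (TN)
    rgb[truth & pred] = (255, 255, 255)         # TP: vessel found -> white
    rgb[truth & ~pred] = (255, 255, 0)          # FN: vessel missed -> yellow
    rgb[~truth & pred] = (255, 105, 180)        # FP: spurious vessel -> pink
    return rgb
```

Summing the four pixel classes over an image yields the TP/FP/TN/FN counts from which the benchmark metrics in the table below are computed.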
Comparison of Vessel Segmentation Algorithms
| Method | Authors | Year | Data Set | Sensitivity | Specificity | Overall Accuracy |
|---|---|---|---|---|---|---|
| Ensemble classifiers-based methods | Orlando et al | 2014 | DRIVE | 0.78 | 0.97 | N/A |
| | Orlando et al | 2017 | DRIVE | 0.79 | 0.97 | N/A |
| | Lupascu et al | 2010 | DRIVE | 0.67 | 0.99 | 0.96 |
| | Fraz et al | 2012 | DRIVE | 0.74 | 0.98 | 0.95 |
| Statistical learning-based methods | Staal et al | 2004 | DRIVE | N/A | N/A | 0.94 |
| | Soares et al | 2006 | DRIVE | N/A | N/A | 0.95 |
| Neural network | Marin et al | 2011 | DRIVE | 0.71 | 0.98 | 0.94 |
| | Vega et al | 2014 | DRIVE | 0.74 | 0.96 | 0.94 |
| | Wang et al | 2015 | DRIVE | 0.82 | 0.97 | 0.98 |
| | Li et al | 2016 | DRIVE | 0.76 | 0.98 | 0.95 |
| | Mo et al | 2017 | DRIVE | 0.78 | 0.98 | 0.95 |
| | Xu et al | 2018 | DRIVE | 0.94 | 0.96 | 0.95 |
| | Yan et al | 2018 | DRIVE | 0.76 | 0.98 | 0.95 |
| Proposed method | | 2021 | DRIVE | 0.78 | 0.99 | 0.97 |
DRIVE = Digital Retinal Images for Vessel Extraction; N/A = not available.
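The three metrics in the table follow the standard pixel-wise definitions used for DRIVE benchmarks. A minimal sketch (the function name `segmentation_metrics` is illustrative, not from the paper):

```python
def segmentation_metrics(tp, fp, tn, fn):
    """Standard pixel-wise segmentation metrics:
    sensitivity = TP / (TP + FN)   (fraction of vessel pixels found)
    specificity = TN / (TN + FP)   (fraction of background kept clean)
    accuracy    = (TP + TN) / all pixels."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```

Because background pixels vastly outnumber vessel pixels in fundus images, a high overall accuracy can coexist with modest sensitivity, which is why the table reports all three figures rather than accuracy alone.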
Figure 3. Total arteriolar and venular areas. A, Representative visualization of the input image and predicted arteriole and venule images using the proposed deep learning method. B, Graph showing distributions of the total arteriolar and venular areas measured by the proposed algorithm. C, Graph showing correlation between the total arteriolar and venular areas. Solid lines show 95% confidence intervals. Dotted lines show 95% prediction intervals. R = 0.58, n = 10 571, and P < 0.001.
Figure 4. Graphs showing the correlation between the retinal vascular area and age or blood pressure. A, Correlation between the total arteriolar area and age (left); R = –0.32, n = 10 571, and P < 0.001. Correlation between the total venular area and age (right); R = –0.54, n = 10 571, and P < 0.001. B, Correlation between (top row) systolic blood pressure (SBP) and the total arteriolar area (R = –0.29, n = 10 571, and P < 0.001) or the total venular area (R = –0.25, n = 10 571, and P < 0.001) and between (bottom row) diastolic blood pressure (DBP) and the total arteriolar area (R = –0.26, n = 10 571, and P < 0.001) or the total venular area (R = –0.22, n = 10 571, and P < 0.001).
Figure 5. Graphs showing arteriovenous ratio (AVR) versus the arteriolar area as an index of blood pressure. A, Correlations between (left) systolic blood pressure (SBP; R = –0.27, n = 102, and P < 0.01) or (right) diastolic blood pressure (DBP; R = –0.25, n = 102, and P < 0.05) and AVR. B, Correlations between (left) SBP (R = –0.31, n = 102, and P < 0.01) or (right) DBP (R = –0.26, n = 102, and P < 0.01) and the total arteriolar area.
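The R values quoted in Figures 3-5 are Pearson correlation coefficients. A stdlib-only sketch of the computation (the name `pearson_r` is illustrative; the paper's statistical software is not specified):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences,
    e.g. total arteriolar area vs. systolic blood pressure per participant."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))  # covariance sum
    sxx = sum((x - mx) ** 2 for x in xs)                    # variance sums
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)
```

A negative R, as reported for the arteriolar area against both SBP and DBP, indicates that the measured vessel area tends to shrink as blood pressure rises, consistent with hypertensive arteriolar narrowing.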