A Deep Learning Architecture for Vascular Area Measurement in Fundus Images.

Kanae Fukutsu1, Michiyuki Saito1, Kousuke Noda1,2, Miyuki Murata1,2, Satoru Kase1, Ryosuke Shiba3, Naoki Isogai3, Yoshikazu Asano4, Nagisa Hanawa4, Mitsuru Dohke4, Manabu Kase5, Susumu Ishida1,2.   

Abstract

Purpose: To develop a novel evaluation system for retinal vessel alterations caused by hypertension using a deep learning algorithm. Design: Retrospective study. Participants: Fundus photographs (n = 10 571) of health-check participants (n = 5598).
Methods: The participants were analyzed using a fully automatic architecture assisted by a deep learning system, and the total area of retinal arterioles and venules was assessed separately. The retinal vessels were extracted automatically from each photograph and categorized as arterioles or venules. Subsequently, the total arteriolar area (AA) and total venular area (VA) were measured. The correlations among AA, VA, age, systolic blood pressure (SBP), and diastolic blood pressure were analyzed. Six ophthalmologists manually evaluated the arteriovenous ratio (AVR) in fundus images (n = 102), and the correlation between SBP and AVR was evaluated. Main Outcome Measures: AA and VA.
Results: The deep learning algorithm demonstrated favorable properties of vessel segmentation and arteriovenous classification, comparable with pre-existing techniques. Using the algorithm, a significant positive correlation was found between AA and VA. Both AA and VA demonstrated negative correlations with age and blood pressure. Furthermore, the SBP showed a higher negative correlation with AA measured by the algorithm than with AVR. Conclusions: The current data demonstrated that the retinal vascular area measured with the deep learning system could be a novel index of hypertension-related vascular changes.
© 2021 by the American Academy of Ophthalmology.

Keywords:  AA, total arteriolar area; AUC, area under the receiver operating characteristic curve; AVR, arteriovenous ratio; Arteriosclerosis; BP, blood pressure; DBP, diastolic blood pressure; DRIVE, Digital Retinal Images for Vessel Extraction; Deep learning system; FN, false-negative; FP, false-positive; FPa, FP arterioles; FPv, FP venules; Hypertensive retinopathy; Imaging; MISCa, misclassification rates of arterioles; MISCv, misclassification rates of venules; RGB, red-green-blue; Retinal arteriolar narrowing; SBP, systolic blood pressure; TN, true-negative; TP, true-positive; TPa, TP arterioles; TPv, TP venules; VA, total venular area

Year:  2021        PMID: 36246007      PMCID: PMC9560649          DOI: 10.1016/j.xops.2021.100004

Source DB:  PubMed          Journal:  Ophthalmol Sci        ISSN: 2666-9145


Hypertension and arteriosclerosis are major public health problems worldwide. Approximately 30% of adults worldwide have hypertension, and 10.4 million deaths have been related to high systolic blood pressure (SBP) in the past 3 decades. Because hypertension causes morphologic changes in the microvasculature, practical techniques to evaluate hypertension-related vessel alterations have been explored.

The transparent structure of the eye enables direct examination of the retinal vasculature; therefore, fundus examination has been used to assess alterations in the microvasculature of patients with hypertension. The retinal arteriovenous ratio (AVR), the ratio between retinal arteriolar and venular diameters, is a classic index of retinal arteriolar narrowing that is used widely and routinely in clinical settings. An AVR of 2:3 is considered healthy, and AVR decreases with age and blood pressure (BP) elevation. Because retinal arteriolar narrowing is related to the risk of various systemic diseases, including diabetes, cardiovascular disease, and cerebrovascular complications, AVR estimation has been a simplified but useful clinical technique in routine ophthalmic practice. However, although AVR is easy to use, ophthalmoscopic evaluation of retinal AVR is subjective and lacks both intragrader and intergrader repeatability.

Therefore, extensive efforts have been made to overcome these shortcomings and standardize AVR estimation using a scientific approach. Consequently, a semiautomated system was developed to calculate the retinal AVR from the diameters of all arterioles and venules coursing through a specified area surrounding the optic disc in fundus photographs. However, because this semiautomated method relies on human graders to choose the vessel segments, it may hinder objective analysis of retinal vessels. Therefore, to establish a more accurate and standardized vascular measurement method and to assess a large number of subjects, an automatic vessel segmentation method with high accuracy is necessary. The aim of the present study was to develop a fully automatic architecture, assisted by a deep learning system, to measure separately the total area of retinal arterioles and venules in fundus images.

Methods

Deep Learning Architecture

Figure 1A shows the neural network used in this study. The network has an encoder-decoder structure similar to that of U-Net, a well-established neural network model for semantic segmentation. A fundus photograph with red-green-blue (RGB) channels and 704 × 704 pixels is used as the input image. From this single network, 2 probability maps, one for arterioles and one for venules, are produced as outputs. Each probability map is binarized with a threshold of 125 (on the 0-255 output scale); the threshold value was determined experimentally. In Figure 1A, the blue bars with black borders represent DownBlocks, whereas the orange and yellow bars represent UpBlocks and the multiple dilated convolutional block, respectively. The left-side path consists of repeated DownBlocks connected to the corresponding UpBlocks. These connections are called skip connections (Fig 1B, bold blue arrows). In addition to the skip connections between DownBlocks and UpBlocks, each DownBlock and UpBlock has an additional short connection internally, similar to a ResBlock. Together, these skip connections reduce gradient loss during backpropagation and mitigate the vanishing gradient problem as the network becomes deeper. The multiple dilated convolutional block consists of 4 dilated convolution layers with different dilation rates. It is placed between the left-side encoder and the right-side decoder and contributes to capturing global features. A sigmoid function transforms the output of the network into a probability map.
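For concreteness, the following is a minimal PyTorch sketch of the topology described above. The paper specifies only the overall structure (DownBlocks with internal short connections, long skip connections to UpBlocks, a multiple dilated convolutional block between encoder and decoder, and a sigmoid output); the layer widths, depth, and dilation rates below are assumptions for illustration, and the original was implemented in NNabla, not PyTorch.

```python
import torch
import torch.nn as nn

class DownBlock(nn.Module):
    """Two 3x3 convolutions with an internal, ResBlock-like short connection."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                                   nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))
        self.conv2 = nn.Sequential(nn.Conv2d(c_out, c_out, 3, padding=1),
                                   nn.BatchNorm2d(c_out))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        h = self.conv1(x)
        return self.act(self.conv2(h) + h)           # internal short connection

class MDCBlock(nn.Module):
    """Multiple dilated convolutional block: 4 parallel dilated convolutions."""
    def __init__(self, c, rates=(1, 2, 4, 8)):        # rates are an assumption
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(c, c, 3, padding=r, dilation=r) for r in rates)

    def forward(self, x):
        return torch.relu(sum(b(x) for b in self.branches))

class VesselNet(nn.Module):
    """Encoder-decoder with long skip connections and an MDC bottleneck."""
    def __init__(self, widths=(32, 64, 128, 256)):    # widths are an assumption
        super().__init__()
        self.downs = nn.ModuleList()
        c_in = 3                                      # RGB input
        for c in widths:
            self.downs.append(DownBlock(c_in, c))
            c_in = c
        self.pool = nn.MaxPool2d(2)
        self.mdc = MDCBlock(widths[-1])               # between encoder and decoder
        rev = widths[::-1]
        self.upconvs = nn.ModuleList(
            nn.ConvTranspose2d(hi, lo, 2, stride=2) for hi, lo in zip(rev, rev[1:]))
        self.ups = nn.ModuleList(
            DownBlock(2 * lo, lo) for lo in rev[1:])  # after skip concatenation
        self.head = nn.Conv2d(widths[0], 2, 1)        # arteriole + venule channels

    def forward(self, x):
        skips = []
        for i, down in enumerate(self.downs):
            x = down(x)
            if i < len(self.downs) - 1:
                skips.append(x)                       # long skip connection source
                x = self.pool(x)
        x = self.mdc(x)
        for upconv, up, skip in zip(self.upconvs, self.ups, reversed(skips)):
            x = up(torch.cat([upconv(x), skip], dim=1))
        return torch.sigmoid(self.head(x))            # per-pixel probability maps

probs = VesselNet()(torch.zeros(1, 3, 704, 704))      # -> shape (1, 2, 704, 704)
arterioles = probs[:, 0] > 125 / 255                  # binarize at threshold 125
```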
Figure 1

Proposed deep learning method. A, Schematic view of the deep learning model. The numbers written beside each layer represent the number of feature maps × width (pixels) × height (pixels). B, Detailed expositions of the DownBlock, UpBlock, multiple dilated convolutional (MDC) block, and signs (arrows and layers). C, Representative input image, manually annotated ground truth, and automatic vessel segmentation of the digital retinal images for vessel extraction dataset (top row) and representative input image, manually annotated ground truth, and automatic vessel segmentation of the Hokudai dataset (bottom row).


Training Methods

We implemented the neural network on NNabla version 0.9.9 (Sony Corporation). Training images were augmented randomly by flipping them horizontally and rotating them within 0.26 radian before they were input into the neural network. To minimize overhead and make maximal use of graphics processing unit memory, we prioritized the size of the input images over the batch size. For the NVIDIA GTX1080 graphics processing unit, we chose 704 × 704 pixels and reduced the batch size to 2 samples. The epoch size was set as 1000, with early stopping. The binary cross-entropy loss function and the Adam optimizer were used with the following parameters: initial learning rate (α), 0.001; β1, 0.9; β2, 0.999; and ε, 1E-8.
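As a concrete illustration, a hedged PyTorch equivalent of this training configuration follows. The original used NNabla; the data here are random placeholders, the stand-in model merely keeps the snippet self-contained, and only the hyperparameters come from the text.

```python
import math
import random

import torch
import torch.nn as nn
import torchvision.transforms.functional as TF

def augment(image, target):
    """Random horizontal flip plus rotation within 0.26 radian, per the text."""
    if random.random() < 0.5:
        image, target = TF.hflip(image), TF.hflip(target)
    angle = math.degrees(random.uniform(-0.26, 0.26))
    return TF.rotate(image, angle), TF.rotate(target, angle)

# stand-in for the encoder-decoder sketched above, to keep this self-contained
model = nn.Sequential(nn.Conv2d(3, 2, 3, padding=1), nn.Sigmoid())
loss_fn = nn.BCELoss()                                # binary cross-entropy
optimizer = torch.optim.Adam(model.parameters(), lr=0.001,
                             betas=(0.9, 0.999), eps=1e-8)

image = torch.rand(3, 704, 704)                       # placeholder fundus image
truth = (torch.rand(2, 704, 704) > 0.5).float()       # placeholder annotation
image, truth = augment(image, truth)
loss = loss_fn(model(image.unsqueeze(0)), truth.unsqueeze(0))  # paper used batch size 2
loss.backward()
optimizer.step()
```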

Datasets

A public dataset known as Digital Retinal Images for Vessel Extraction (DRIVE) was used to evaluate our deep learning algorithm for comparison with other methods. Our original Hokudai dataset consisting of fundus images acquired at the Keijinkai Maruyama Clinic and Hokkaido University Hospital also was used to develop the deep learning algorithm. Blurred fundus images resulting from media opacities or inadequate imaging conditions were excluded. The institutional review boards for clinical research of the Keijinkai Maruyama Clinic (identifier, 20120626-1) and Hokkaido University Hospital (identifier, 012-0106) approved the study protocol. The requirement for informed consent was waived because of the retrospective nature of the study. This research adhered to the tenets of the Declaration of Helsinki. The Hokudai dataset contained 102 color fundus photographs obtained from patients who visited the Keijinkai Maruyama Clinic for regular health checkups using an autofundus camera (AFC-330; Nidek, LLC, Tokyo, Japan). The mean age was 52 ± 8 years, the mean SBP was 124 ± 13 mmHg, and the mean diastolic BP (DBP) was 79 ± 10 mmHg. The corresponding ground truth images were generated by manual annotation of retinal vessels by 2 ophthalmologists (M.S. and K.F.) in a precise fashion. The Hokudai dataset then was divided into 82 images as the training set, 10 images as the validation set, and 10 images as the test set. The DRIVE dataset, a public dataset containing 40 color fundus photographs from a diabetic retinopathy screening program in The Netherlands, has been used widely to evaluate the accuracy of automatic retinal vessel segmentation methods.15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27 The DRIVE dataset contains 20 images as the training set, and 20 images for validation and testing of the deep learning algorithm. The size of each photograph in the DRIVE dataset is 565 × 584 pixels. To apply our deep learning method, which accepts images with 704 × 704 pixels as input data, each input image was pasted on a black background mount measuring 704 × 704 pixels. The verification process was conducted based on the original size.
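The padding step is simple to reproduce; below is a minimal sketch using Pillow. Centering the photograph on the 704 × 704 mount is an assumption (the paper does not state the paste position), and the file path follows the standard DRIVE naming but is merely illustrative.

```python
from PIL import Image

def pad_to_canvas(path, size=704):
    """Paste a fundus photograph onto a black square background mount."""
    img = Image.open(path).convert("RGB")             # DRIVE images are 565 x 584
    canvas = Image.new("RGB", (size, size))           # defaults to black
    # centering is an assumption; the paper only states the image was pasted
    canvas.paste(img, ((size - img.width) // 2, (size - img.height) // 2))
    return canvas

padded = pad_to_canvas("DRIVE/training/images/21_training.tif")
```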

Verification of Vessel Segmentation Algorithms

To evaluate the vessel segmentation ability of our deep learning architecture using the DRIVE dataset, the algorithm was arranged to produce 1 output image. To evaluate the neural network trained on the Hokudai dataset, we combined the probability maps of arterioles and venules output by our network into a single probability map of vessels. Accuracy of vessel segmentation in the predicted images was evaluated by counting the false-positive (FP), false-negative (FN), true-positive (TP), and true-negative (TN) pixels of the retinal vessel structures. Using these counts, the indices used to evaluate the accuracy of the deep learning system, namely sensitivity, specificity, overall accuracy, and the Dice coefficient, were calculated from the equations below:

Sensitivity = TP / (TP + FN)
Specificity = TN / (TN + FP)
Overall accuracy = (TP + TN) / (TP + TN + FP + FN)
Dice coefficient = 2TP / (2TP + FP + FN)

The area under the receiver operating characteristic curve (AUC) was calculated from sensitivity and specificity using the scikit-learn module 0.19.1 with Python version 3.6.4.
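These pixel-level indices are straightforward to compute. The following is a minimal sketch using NumPy and scikit-learn (the paper used scikit-learn 0.19.1 for the AUC); array names are placeholders, and computing the AUC directly from the continuous probability map is one common approach rather than necessarily the paper's exact procedure.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def segmentation_metrics(prob_map, truth, threshold=125 / 255):
    """Pixel-level indices; `prob_map` in [0, 1], `truth` a boolean vessel mask."""
    pred = prob_map >= threshold
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),
        "auc": roc_auc_score(truth.ravel(), prob_map.ravel()),
    }
```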

Verification of Arteriovenous Classification Algorithms

Accuracy of the arteriovenous classification in the predicted images was assessed by the misclassification rate of arterioles (MISCa), the misclassification rate of venules (MISCv), and the overall accuracy of the arteriovenous classification, calculated using TP arterioles (TPa), TP venules (TPv), FP arterioles (FPa), and FP venules (FPv) from the equations below:

MISCa = FPv / (TPa + FPv)
MISCv = FPa / (TPv + FPa)
Overall accuracy = (TPa + TPv) / (TPa + TPv + FPa + FPv)

The number of pixels identified as both arteriole and venule was calculated in the output images produced from the test set of the Hokudai dataset (10 images).
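A sketch of these indices as code follows. Note that the exact equations were lost from this copy of the text, so the formulas here are the standard forms consistent with the definitions of TPa, TPv, FPa, and FPv above, not a verbatim reproduction of the paper's equations; the pixel counts passed in are arbitrary examples.

```python
def av_classification_metrics(tpa, tpv, fpa, fpv):
    """Arteriovenous classification indices from pixel counts (reconstructed forms)."""
    misca = fpv / (tpa + fpv)      # arteriole pixels predicted as venules
    miscv = fpa / (tpv + fpa)      # venule pixels predicted as arterioles
    overall = (tpa + tpv) / (tpa + tpv + fpa + fpv)
    return misca, miscv, overall

# example with arbitrary counts, not values from the paper
print(av_classification_metrics(tpa=10_000, tpv=18_000, fpa=170, fpv=108))
```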

Vascular Area Measurement

Color retinal photographs (n = 10 571) obtained from patients who visited the Keijinkai Maruyama Clinic for regular health checkups were used to analyze the vascular area measured by the deep learning algorithm. The 102 images of the Hokudai dataset used to develop the deep learning system were included in this set. The mean age was 49 ± 10 years, the mean SBP was 117 ± 16 mmHg, and the mean DBP was 74 ± 11 mmHg. Predicted images of the arterioles and venules were generated from the color fundus photographs using the trained neural network. The pixel sum of each probability map was defined as the total arteriolar area and the total venular area, respectively.
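Concretely, the area measurement reduces to pixel counting on the thresholded maps. A minimal sketch follows (array names and random inputs are placeholders; the binarization threshold of 125 on the 0-255 scale is the one stated in the Methods).

```python
import numpy as np

def vascular_areas(arteriole_prob, venule_prob, threshold=125 / 255):
    """Total arteriolar and venular areas as pixel counts of the binarized maps."""
    aa = int((arteriole_prob >= threshold).sum())    # total arteriolar area (AA)
    va = int((venule_prob >= threshold).sum())       # total venular area (VA)
    return aa, va

a_map, v_map = np.random.rand(704, 704), np.random.rand(704, 704)  # placeholders
print(vascular_areas(a_map, v_map))
```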

Repeatability of Vascular Area Measurement

To examine the repeatability of vascular area measurement by the deep learning algorithm, 2 consecutive fundus photographs of both eyes of 10 healthy volunteers were obtained and the areas of arterioles and venules in each photograph were measured.

Arteriovenous Ratio Measurement

For manual AVR measurement, we sought to extract approximately 100 photographs from the original fundus photograph set (n = 10 571). We used random stratification to extract the photographs so that the extracted set had the same distributions of age and BP as the original population. Consequently, a total of 102 photographs were extracted. The mean age was 52 ± 9 years, the mean SBP was 117 ± 14 mmHg, and the mean DBP was 73 ± 9 mmHg. Subsequently, well-trained ophthalmologists manually graded the AVR of these photographs on a scale from 0 to 1 in steps of 0.1, and the average value was used as the representative AVR value. In accordance with a previous study, the evaluation was performed visually after choosing a matched pair of an arteriole and a venule in each photograph. Intergrader agreement of the AVR was analyzed by calculating the intraclass correlation coefficient using RStudio version 1.1.456.
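The paper reports a single intraclass correlation coefficient for the graders but does not state which ICC form was used; the analysis itself was done in RStudio. As a hedged illustration, the following computes ICC(2,1) (Shrout-Fleiss two-way random effects, single rater), one common choice for intergrader agreement, on placeholder data.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1) for `ratings` of shape (n targets, k raters), e.g. (102, 6)."""
    n, k = ratings.shape
    grand = ratings.mean()
    ms_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # targets
    ms_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
    ss_err = (((ratings - grand) ** 2).sum()
              - ms_rows * (n - 1) - ms_cols * (k - 1))
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)

avr_grades = np.random.default_rng(0).uniform(0, 1, (102, 6)).round(1)  # placeholder
print(icc_2_1(avr_grades))
```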

Statistical Analysis

Pearson's product-moment correlation was used to calculate the correlation coefficients between the vessel areas and the other parameters using RStudio version 1.1.456 statistical software.
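For reference, the equivalent computation in Python (the study used RStudio); the arrays below are synthetic placeholders standing in for per-image measurements.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
areas = rng.normal(12_900, 2_900, 1000)                   # placeholder AA values
sbp = 120 - 0.001 * (areas - 12_900) + rng.normal(0, 15, 1000)
r, p = pearsonr(areas, sbp)                               # correlation coefficient
print(f"R = {r:.2f}, P = {p:.3g}")
```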

Results

Verification of the Vessel Segmentation Ability

In the present study, the proposed deep learning system output predicted images in which the retinal vessels were distinguished as clusters of arterioles and venules (Fig 1C). Using the predicted images, we assessed the accuracy of the newly developed deep learning system in automatic segmentation of arterioles and venules directly from fundus images, as reported previously (Fig 2A). In the assessment of vessel segmentation ability on the DRIVE dataset, the statistics were as follows: sensitivity, 0.778; specificity, 0.985; overall accuracy, 0.967; Dice coefficient, 0.800; and AUC, 0.98. These data indicated the favorable vessel segmentation ability of the deep learning system (Table 1). On the Hokudai dataset, the statistics were as follows: sensitivity, 0.833; specificity, 0.994; overall accuracy, 0.983; Dice coefficient, 0.871; and AUC, 0.99.
Figure 2

Verification of automatic vessel segmentation and arteriovenous classification. A, Representative input image, ground truth image, output image by automatic vessel segmentation, and merged image (top row). In merged images, yellow pixels are regarded as false negative results, pink pixels are regarded as false positive results, white pixels are regarded as true positive results, and black pixels are regarded as true negative results. Magnified images of the boxed area in each image above appear in the bottom row. B, Representative input image (left). Representative predicted arteriole image (middle). Red pixels represent the area belonging to the arteriole in ground truth and predicted as an arteriole by the deep learning program. Blue pixels represent the area belonging to the venule in ground truth, but predicted as an arteriole by the deep learning program. Representative predicted venule image (right). Blue pixels represent the area belonging to the venule in ground truth and predicted as a venule by the deep learning program. Red pixels represent the area belonging to the arteriole in ground truth, but predicted as a venule by the deep learning program.

Table 1

Comparison of Vessel Segmentation Algorithms

Method                             | Authors       | Year | Data Set | Sensitivity | Specificity | Overall Accuracy
Ensemble classifiers-based methods | Orlando et al | 2014 | DRIVE    | 0.78        | 0.97        | N/A
Ensemble classifiers-based methods | Orlando et al | 2017 | DRIVE    | 0.79        | 0.97        | N/A
Ensemble classifiers-based methods | Lupascu et al | 2010 | DRIVE    | 0.67        | 0.99        | 0.96
Ensemble classifiers-based methods | Fraz et al    | 2012 | DRIVE    | 0.74        | 0.98        | 0.95
Statistical learning-based methods | Staal et al   | 2004 | DRIVE    | N/A         | N/A         | 0.94
Statistical learning-based methods | Soares et al  | 2006 | DRIVE    | N/A         | N/A         | 0.95
Neural network                     | Marin et al   | 2011 | DRIVE    | 0.71        | 0.98        | 0.94
Neural network                     | Vega et al    | 2014 | DRIVE    | 0.74        | 0.96        | 0.94
Neural network                     | Wang et al    | 2015 | DRIVE    | 0.82        | 0.97        | 0.98
Neural network                     | Li et al      | 2016 | DRIVE    | 0.76        | 0.98        | 0.95
Neural network                     | Mo et al      | 2017 | DRIVE    | 0.78        | 0.98        | 0.95
Neural network                     | Xu et al      | 2018 | DRIVE    | 0.94        | 0.96        | 0.95
Neural network                     | Yan et al     | 2018 | DRIVE    | 0.76        | 0.98        | 0.95
Proposed method                    |               | 2021 | DRIVE    | 0.78        | 0.99        | 0.97

DRIVE = Digital Retinal Images for Vessel Extraction; N/A = not available.


Verification of the Arteriovenous Classification Ability

We assessed the deep learning system's algorithm for classification of vessels into arterioles and venules using the validity indices reported previously (Fig 2B). In the assessment of arteriovenous classification ability on the Hokudai dataset, the statistics were as follows: MISCa, 1.065%; MISCv, 0.930%; and overall accuracy of the arteriovenous classification, 0.99. In comparison with the indices of previously reported deep learning systems for classifying arterioles and venules in fundus images,28, 29, 30, 31 the current deep learning system also showed favorable arteriovenous classification ability. For further verification, we calculated the number of pixels identified as both arteriole and venule. The average percentages of overlapping pixels relative to the total arteriole area, the total venule area, and the total pixels were 0.18%, 0.14%, and 0.006%, respectively.

Total Arteriolar and Venular Areas

Using the deep learning system, we automatically measured the total arteriolar and venular areas in fundus images (n = 10 571). The mean total area of arterioles was 12 929 ± 287 pixels per fundus image, whereas that of venules was 22 046 ± 3169 pixels per fundus image (Fig 3A, B). In addition, the arteriolar and venular areas showed a moderate positive correlation (R = 0.59; n = 10 571; P < 0.001; Fig 3C). The repeatability of vascular area measurement was evaluated using 2 consecutive fundus photographs of both eyes of 10 healthy volunteers, and the correlation coefficients of arteriole area and venule area were r = 0.8775429 (n = 20; P < 0.001) and r = 0.6809523 (n = 20; P < 0.001), respectively.
Figure 3

Total arteriolar and venular areas. A, Representative visualization of the input image and predicted arteriole and venule images using the proposed deep learning method. B, Graph showing distributions of the total arteriolar and venular areas measured by the proposed algorithm. C, Graph showing correlation between the total arteriolar and venular areas. Solid lines show 95% confidence intervals. Dotted lines show 95% prediction intervals. R = 0.58, n = 10 571, and P < 0.001.


Correlation between the Retinal Vascular Area and Age

To investigate the relationship between the retinal vascular area and age, we assessed the correlation of age with the arteriolar and venular areas separately. Age showed negative correlations with the retinal arteriolar area (R = –0.32; n = 10 571; P < 0.001) and the retinal venular area (R = –0.54; n = 10 571; P < 0.001; Fig 4A).
Figure 4

Graphs showing the correlation between the retinal vascular area and age or blood pressure. A, Correlation between the total arteriolar area and age (left); R = –0.32, n = 10 571, and P < 0.001. Correlation between the total venular area and age (right); R = –0.54, n = 10 571, and P < 0.001. B, Correlation between (top row) systolic blood pressure (SBP) and the total arteriolar area (R = –0.29, n = 10 571, and P < 0.001) or the total venular area (R = –0.25, n = 10 571, and P < 0.001) and between (bottom row) diastolic blood pressure (DBP) and the total arteriolar area (R = –0.26, n = 10 571, and P < 0.001) or the total venular area (R = –0.22, n = 10 571, and P < 0.001).


Correlation between the Retinal Vascular Area and Blood Pressure

To investigate the relationship between the retinal vascular area and BP, we calculated the correlation of SBP and DBP with the arteriolar and venular areas separately. Systolic BP showed negative correlations with both the retinal arteriolar area (R = –0.29; n = 10 571; P < 0.001) and the retinal venular area (R = –0.25; n = 10 571; P < 0.001; Fig 4B). Similarly, DBP showed a negative correlation with both the retinal arteriolar area (R = –0.26; n = 10 571; P < 0.001) and the retinal venular area (R = –0.22; n = 10 571; P < 0.001; Fig 4B).

Arteriovenous Ratio versus Retinal Vascular Area: Accuracy as an Index of Blood Pressure and Age

To assess the clinical significance of the retinal vascular area as an index of hypertension-related alterations in retinal vessels, the correlation coefficient between SBP or DBP and the retinal vascular area was compared with that between SBP or DBP and AVR. Arteriovenous ratio, which was evaluated manually by well-trained ophthalmologists, showed negative correlations with SBP (R = –0.27; n = 102; P < 0.01) and DBP (R = –0.25; n = 102; P < 0.05; Fig 5A). Likewise, the retinal arteriolar area showed negative correlations with SBP (R = –0.31; n = 102; P < 0.01) and DBP (R = –0.26; n = 102; P < 0.01; Fig 5B), indicating that retinal vascular area measurement by the deep learning architecture can be applied clinically for the evaluation of hypertension-related vessel alterations as a state-of-the-art technique. The intraclass correlation coefficient value of manually evaluated AVR was relatively low at 0.104.
Figure 5

Graphs showing arteriovenous ratio (AVR) versus the arteriolar area as an index of blood pressure. A, Correlations between (left) systolic blood pressure (SBP; R = –0.27, n = 102, and P < 0.01) or (right) diastolic blood pressure (DBP; R = –0.25, n = 102, and P < 0.05) and AVR. B, Correlations between (left) SBP (R = –0.31, n = 102, and P < 0.01) or (right) DBP (R = –0.26, n = 102, and P < 0.01) and the total arteriolar area.


Discussion

In the present study, we investigated the clinical usefulness of a novel deep learning architecture for retinal vessel segmentation and arteriovenous classification that enables 2-dimensional assessment of the retinal vasculature and provides a more accurate evaluation axis for hypertension-related vascular changes than pre-existing AVR evaluation methods. The architecture showed that the retinal venular area is larger than the retinal arteriolar area in fundus images, as expected based on previous findings obtained from vascular caliber measurements. In addition, as a novel finding, we showed that the correlation between the retinal arteriolar area and BP was stronger than that between manually evaluated AVR and BP. The current data indicated that automatic measurement of the retinal vascular area in fundus images could serve as an alternative to AVR as an index of hypertension-related retinal changes.

Segmentation and classification of the retinal vasculature are indispensable steps in automated measurement of the total area of retinal arterioles and venules. Previously, several approaches, such as graph-based methods, were attempted to develop semiautomated systems for segmentation and discrimination of arterioles and venules in fundus images. Thereafter, deep learning strategies were proposed for automated segmentation and discrimination. In comparison with these previous systems, the current architecture achieved the desired accuracies of vessel segmentation and arteriovenous classification. The robustness of our method is presumably the result of the detailed manual annotation of the ground truth images by well-trained ophthalmologists. In addition, the combined models and the multitude of skip connections might have contributed to the high accuracy of the current architecture.

Using the novel deep learning architecture, we found, as expected, that the retinal venular area was larger than the retinal arteriolar area. Because area calculation of the retinal vasculature by manual image analysis has been technically impractical, the present work is, to the best of our knowledge, the first attempt to measure the total area of retinal arterioles and venules. Anatomically, the retinal arterioles and venules depicted in fundus photographs lie within the superficial nerve fiber layer of the retina, and the arteriolar diameter invariably is smaller than that of the venule running parallel to it. Previous image analysis data demonstrated that the retinal venous caliber is larger than the retinal arterial caliber. In full-term infants, the mean arteriolar diameter in the retina is 85.5 μm, whereas the mean venular diameter is 130.0 μm. The mean retinal arteriolar and venular calibers expand to 162.7 μm and 226.8 μm, respectively, by 6 years of age, and both subsequently decrease after middle age. In accordance with these findings, the current data showed that the retinal venular area is larger than the retinal arteriolar area in fundus images. Second, we found a negative correlation between age and the retinal vascular areas. In particular, the venular area showed a stronger correlation with age than the arteriolar area, possibly because of the susceptibility of retinal arterioles to systemic variability, such as fluctuations in BP.

Human BP is associated with retinal vascular calibers. Narrowing or attenuation of the retinal arterioles has been reported to be proportional to the degree of BP elevation, and AVR evaluation has long been used in clinical settings. However, it has been argued that ophthalmoscopic evaluation of retinal AVR is subjective and lacks intergrader repeatability, which indeed proved to be quite low in this study. More recently, a generalized method was established to calculate summary indices reflecting the average widths of the retinal arterioles and venules, namely the central retinal artery equivalent and central retinal vein equivalent. These indices showed that arteriolar narrowing was associated strongly with higher BP and that venular narrowing also was associated with BP elevation, independent of age. Our novel index of retinal arteriolar and venular areas showed the same tendency as reported previously. Furthermore, our method has several advantages over previous ones. Whereas the central retinal artery equivalent and central retinal vein equivalent are defined by measuring the width of retinal vessels between 0.5 and 1 disc diameter from the disc margin, the evidence is insufficient to establish that this zone is the optimal region in which to evaluate alterations in retinal vessels caused by systemic disorders. In contrast, our method automatically assessed the vascular areas of entire fundus photographs without such a sampling bias. Therefore, vascular area measurement theoretically has the potential to assess the overall condition of the retinal vasculature with higher accuracy than the pre-existing methodology. A meta-analysis revealed that the association between narrowed retinal arteriolar diameter and BP or hypertension was consistent across ethnic samples and age groups, from children to older adults, and in both cross-sectional and longitudinal studies. In the present study, we also elucidated a robust association between the retinal arteriolar area and elevated BP. Moreover, the arteriolar area showed a stronger correlation with SBP than did manually evaluated AVR, suggesting that our novel approach is at least comparable with previous methods, such as AVR estimation and semiautomatic calculation of the central retinal artery equivalent and central retinal vein equivalent.

This study has several limitations. First, because the deep learning system adopts multiclass, multilabel classification, it is possible, albeit rare, for a single pixel to be classified as both arteriolar and venular area when its score exceeds the threshold in both the artery and vein output images. Second, the measurable vascular area in fundus images using this architecture was restricted by several conditions, such as the angle of view. Ultra-widefield retinal imaging technology may further support the concept of using the retinal vessel area instead of the vessel caliber; however, the versatility of our deep learning architecture for imaging methods with other settings was not examined in the current study. Third, the participants enrolled in this study ranged from middle-aged to elderly adults, because young people generally do not undergo fundus photography at regular health checkups. Investigating the retinal vascular area across a wider age range may shed further light on this novel index. Finally, the vascular area was expressed in pixels rather than in absolute metric units, because refractive value and axial length data were not available; neither is measured at routine health checkups in Japan. Further studies are needed to improve the quality of this deep learning architecture.

In summary, we developed a novel deep learning architecture for retinal vessel segmentation that showed accuracy comparable with that of previous methods. The automatic classification of vessels into arterioles and venules enables objective assessment of hypertensive alterations of retinal vessels via automatic vascular area measurement. A meta-analysis of longitudinal studies previously demonstrated an association between an antecedent increase in peripheral vascular resistance and the subsequent development of hypertension. Therefore, this newly developed deep learning system is potentially useful for the prediction of hypertension.
References:  39 in total

1.  Methods for evaluation of retinal microvascular abnormalities associated with hypertension/sclerosis in the Atherosclerosis Risk in Communities Study.

Authors:  L D Hubbard; R J Brothers; W N King; L X Clegg; R Klein; L S Cooper; A R Sharrett; M D Davis; J Cai
Journal:  Ophthalmology       Date:  1999-12       Impact factor: 12.079

2.  Retinal microvascular abnormalities and MRI-defined subclinical cerebral infarction: the Atherosclerosis Risk in Communities Study.

Authors:  Lawton S Cooper; Tien Y Wong; Ronald Klein; A Richey Sharrett; R Nick Bryan; Larry D Hubbard; David J Couper; Gerardo Heiss; Paul D Sorlie
Journal:  Stroke       Date:  2005-11-23       Impact factor: 7.914

3.  Global burden of hypertension: analysis of worldwide data.

Authors:  Patricia M Kearney; Megan Whelton; Kristi Reynolds; Paul Muntner; Paul K Whelton; Jiang He
Journal:  Lancet       Date:  2005 Jan 15-21       Impact factor: 79.321

Review 4.  Epidemiology of Atherosclerosis and the Potential to Reduce the Global Burden of Atherothrombotic Disease.

Authors:  William Herrington; Ben Lacey; Paul Sherliker; Jane Armitage; Sarah Lewington
Journal:  Circ Res       Date:  2016-02-19       Impact factor: 17.367

Review 5.  Remodeling of resistance arteries in essential hypertension and effects of antihypertensive treatment.

Authors:  Ernesto L Schiffrin
Journal:  Am J Hypertens       Date:  2004-12       Impact factor: 2.689

6.  Retinal vascular calibre and the risk of coronary heart disease-related death.

Authors:  J J Wang; G Liew; T Y Wong; W Smith; R Klein; S R Leeder; P Mitchell
Journal:  Heart       Date:  2006-07-13       Impact factor: 5.994

Review 7.  Retinal arteriolar diameter and the prevalence and incidence of hypertension: a systematic review and meta-analysis of their association.

Authors:  Sky K H Chew; Jing Xie; Jie Jin Wang
Journal:  Curr Hypertens Rep       Date:  2012-04       Impact factor: 5.369

Review 8.  The macrocirculation and microcirculation of hypertension.

Authors:  François Feihl; Lucas Liaudet; Bernard Waeber
Journal:  Curr Hypertens Rep       Date:  2009-06       Impact factor: 5.369

Review 9.  Retinal vascular caliber measurements: clinical significance, current knowledge and future perspectives.

Authors:  M Kamran Ikram; Yi Ting Ong; Carol Y Cheung; Tien Y Wong
Journal:  Ophthalmologica       Date:  2012-09-20       Impact factor: 3.250

10.  Global, regional, and national comparative risk assessment of 79 behavioural, environmental and occupational, and metabolic risks or clusters of risks in 188 countries, 1990-2013: a systematic analysis for the Global Burden of Disease Study 2013.

Authors:  Mohammad H Forouzanfar; Lily Alexander; H Ross Anderson; Victoria F Bachman; Stan Biryukov; Michael Brauer; Richard Burnett; Daniel Casey; Matthew M Coates; Aaron Cohen; Kristen Delwiche; Kara Estep; Joseph J Frostad; K C Astha; Hmwe H Kyu; Maziar Moradi-Lakeh; Marie Ng; Erica Leigh Slepak; Bernadette A Thomas; Joseph Wagner; Gunn Marit Aasvang; Cristiana Abbafati; Ayse Abbasoglu Ozgoren; Foad Abd-Allah; Semaw F Abera; Victor Aboyans; Biju Abraham; Jerry Puthenpurakal Abraham; Ibrahim Abubakar; Niveen M E Abu-Rmeileh; Tania C Aburto; Tom Achoki; Ademola Adelekan; Koranteng Adofo; Arsène K Adou; José C Adsuar; Ashkan Afshin; Emilie E Agardh; Mazin J Al Khabouri; Faris H Al Lami; Sayed Saidul Alam; Deena Alasfoor; Mohammed I Albittar; Miguel A Alegretti; Alicia V Aleman; Zewdie A Alemu; Rafael Alfonso-Cristancho; Samia Alhabib; Raghib Ali; Mohammed K Ali; François Alla; Peter Allebeck; Peter J Allen; Ubai Alsharif; Elena Alvarez; Nelson Alvis-Guzman; Adansi A Amankwaa; Azmeraw T Amare; Emmanuel A Ameh; Omid Ameli; Heresh Amini; Walid Ammar; Benjamin O Anderson; Carl Abelardo T Antonio; Palwasha Anwari; Solveig Argeseanu Cunningham; Johan Arnlöv; Valentina S Arsic Arsenijevic; Al Artaman; Rana J Asghar; Reza Assadi; Lydia S Atkins; Charles Atkinson; Marco A Avila; Baffour Awuah; Alaa Badawi; Maria C Bahit; Talal Bakfalouni; Kalpana Balakrishnan; Shivanthi Balalla; Ravi Kumar Balu; Amitava Banerjee; Ryan M Barber; Suzanne L Barker-Collo; Simon Barquera; Lars Barregard; Lope H Barrero; Tonatiuh Barrientos-Gutierrez; Ana C Basto-Abreu; Arindam Basu; Sanjay Basu; Mohammed O Basulaiman; Carolina Batis Ruvalcaba; Justin Beardsley; Neeraj Bedi; Tolesa Bekele; Michelle L Bell; Corina Benjet; Derrick A Bennett; Habib Benzian; Eduardo Bernabé; Tariku J Beyene; Neeraj Bhala; Ashish Bhalla; Zulfiqar A Bhutta; Boris Bikbov; Aref A Bin Abdulhak; Jed D Blore; Fiona M Blyth; Megan A Bohensky; Berrak Bora Başara; Guilherme Borges; Natan M Bornstein; Dipan Bose; Soufiane Boufous; Rupert R Bourne; Michael Brainin; Alexandra Brazinova; Nicholas J Breitborde; Hermann Brenner; Adam D M Briggs; David M Broday; Peter M Brooks; Nigel G Bruce; Traolach S Brugha; Bert Brunekreef; Rachelle Buchbinder; Linh N Bui; Gene Bukhman; Andrew G Bulloch; Michael Burch; Peter G J Burney; Ismael R Campos-Nonato; Julio C Campuzano; Alejandra J Cantoral; Jack Caravanos; Rosario Cárdenas; Elisabeth Cardis; David O Carpenter; Valeria Caso; Carlos A Castañeda-Orjuela; Ruben E Castro; Ferrán Catalá-López; Fiorella Cavalleri; Alanur Çavlin; Vineet K Chadha; Jung-Chen Chang; Fiona J Charlson; Honglei Chen; Wanqing Chen; Zhengming Chen; Peggy P Chiang; Odgerel Chimed-Ochir; Rajiv Chowdhury; Costas A Christophi; Ting-Wu Chuang; Sumeet S Chugh; Massimo Cirillo; Thomas K D Claßen; Valentina Colistro; Mercedes Colomar; Samantha M Colquhoun; Alejandra G Contreras; Cyrus Cooper; Kimberly Cooperrider; Leslie T Cooper; Josef Coresh; Karen J Courville; Michael H Criqui; Lucia Cuevas-Nasu; James Damsere-Derry; Hadi Danawi; Lalit Dandona; Rakhi Dandona; Paul I Dargan; Adrian Davis; Dragos V Davitoiu; Anand Dayama; E Filipa de Castro; Vanessa De la Cruz-Góngora; Diego De Leo; Graça de Lima; Louisa Degenhardt; Borja del Pozo-Cruz; Robert P Dellavalle; Kebede Deribe; Sarah Derrett; Don C Des Jarlais; Muluken Dessalegn; Gabrielle A deVeber; Karen M Devries; Samath D Dharmaratne; Mukesh K Dherani; Daniel Dicker; Eric L Ding; Klara Dokova; E Ray Dorsey; Tim R Driscoll; Leilei Duan; Adnan M Durrani; Beth E Ebel; Richard G Ellenbogen; Yousef M 
Elshrek; Matthias Endres; Sergey P Ermakov; Holly E Erskine; Babak Eshrati; Alireza Esteghamati; Saman Fahimi; Emerito Jose A Faraon; Farshad Farzadfar; Derek F J Fay; Valery L Feigin; Andrea B Feigl; Seyed-Mohammad Fereshtehnejad; Alize J Ferrari; Cleusa P Ferri; Abraham D Flaxman; Thomas D Fleming; Nataliya Foigt; Kyle J Foreman; Urbano Fra Paleo; Richard C Franklin; Belinda Gabbe; Lynne Gaffikin; Emmanuela Gakidou; Amiran Gamkrelidze; Fortuné G Gankpé; Ron T Gansevoort; Francisco A García-Guerra; Evariste Gasana; Johanna M Geleijnse; Bradford D Gessner; Pete Gething; Katherine B Gibney; Richard F Gillum; Ibrahim A M Ginawi; Maurice Giroud; Giorgia Giussani; Shifalika Goenka; Ketevan Goginashvili; Hector Gomez Dantes; Philimon Gona; Teresita Gonzalez de Cosio; Dinorah González-Castell; Carolyn C Gotay; Atsushi Goto; Hebe N Gouda; Richard L Guerrant; Harish C Gugnani; Francis Guillemin; David Gunnell; Rahul Gupta; Rajeev Gupta; Reyna A Gutiérrez; Nima Hafezi-Nejad; Holly Hagan; Maria Hagstromer; Yara A Halasa; Randah R Hamadeh; Mouhanad Hammami; Graeme J Hankey; Yuantao Hao; Hilda L Harb; Tilahun Nigatu Haregu; Josep Maria Haro; Rasmus Havmoeller; Simon I Hay; Mohammad T Hedayati; Ileana B Heredia-Pi; Lucia Hernandez; Kyle R Heuton; Pouria Heydarpour; Martha Hijar; Hans W Hoek; Howard J Hoffman; John C Hornberger; H Dean Hosgood; Damian G Hoy; Mohamed Hsairi; Guoqing Hu; Howard Hu; Cheng Huang; John J Huang; Bryan J Hubbell; Laetitia Huiart; Abdullatif Husseini; Marissa L Iannarone; Kim M Iburg; Bulat T Idrisov; Nayu Ikeda; Kaire Innos; Manami Inoue; Farhad Islami; Samaya Ismayilova; Kathryn H Jacobsen; Henrica A Jansen; Deborah L Jarvis; Simerjot K Jassal; Alejandra Jauregui; Sudha Jayaraman; Panniyammakal Jeemon; Paul N Jensen; Vivekanand Jha; Fan Jiang; Guohong Jiang; Ying Jiang; Jost B Jonas; Knud Juel; Haidong Kan; Sidibe S Kany Roseline; Nadim E Karam; André Karch; Corine K Karema; Ganesan Karthikeyan; Anil Kaul; Norito Kawakami; Dhruv S Kazi; Andrew H Kemp; Andre P Kengne; Andre Keren; Yousef S Khader; Shams Eldin Ali Hassan Khalifa; Ejaz A Khan; Young-Ho Khang; Shahab Khatibzadeh; Irma Khonelidze; Christian Kieling; Daniel Kim; Sungroul Kim; Yunjin Kim; Ruth W Kimokoti; Yohannes Kinfu; Jonas M Kinge; Brett M Kissela; Miia Kivipelto; Luke D Knibbs; Ann Kristin Knudsen; Yoshihiro Kokubo; M Rifat Kose; Soewarta Kosen; Alexander Kraemer; Michael Kravchenko; Sanjay Krishnaswami; Hans Kromhout; Tiffany Ku; Barthelemy Kuate Defo; Burcu Kucuk Bicer; Ernst J Kuipers; Chanda Kulkarni; Veena S Kulkarni; G Anil Kumar; Gene F Kwan; Taavi Lai; Arjun Lakshmana Balaji; Ratilal Lalloo; Tea Lallukka; Hilton Lam; Qing Lan; Van C Lansingh; Heidi J Larson; Anders Larsson; Dennis O Laryea; Pablo M Lavados; Alicia E Lawrynowicz; Janet L Leasher; Jong-Tae Lee; James Leigh; Ricky Leung; Miriam Levi; Yichong Li; Yongmei Li; Juan Liang; Xiaofeng Liang; Stephen S Lim; M Patrice Lindsay; Steven E Lipshultz; Shiwei Liu; Yang Liu; Belinda K Lloyd; Giancarlo Logroscino; Stephanie J London; Nancy Lopez; Joannie Lortet-Tieulent; Paulo A Lotufo; Rafael Lozano; Raimundas Lunevicius; Jixiang Ma; Stefan Ma; Vasco M P Machado; Michael F MacIntyre; Carlos Magis-Rodriguez; Abbas A Mahdi; Marek Majdan; Reza Malekzadeh; Srikanth Mangalam; Christopher C Mapoma; Marape Marape; Wagner Marcenes; David J Margolis; Christopher Margono; Guy B Marks; Randall V Martin; Melvin B Marzan; Mohammad T Mashal; Felix Masiye; Amanda J Mason-Jones; Kunihiro Matsushita; Richard Matzopoulos; Bongani M Mayosi; Tasara T Mazorodze; Abigail C 
McKay; Martin McKee; Abigail McLain; Peter A Meaney; Catalina Medina; Man Mohan Mehndiratta; Fabiola Mejia-Rodriguez; Wubegzier Mekonnen; Yohannes A Melaku; Michele Meltzer; Ziad A Memish; Walter Mendoza; George A Mensah; Atte Meretoja; Francis Apolinary Mhimbira; Renata Micha; Ted R Miller; Edward J Mills; Awoke Misganaw; Santosh Mishra; Norlinah Mohamed Ibrahim; Karzan A Mohammad; Ali H Mokdad; Glen L Mola; Lorenzo Monasta; Julio C Montañez Hernandez; Marcella Montico; Ami R Moore; Lidia Morawska; Rintaro Mori; Joanna Moschandreas; Wilkister N Moturi; Dariush Mozaffarian; Ulrich O Mueller; Mitsuru Mukaigawara; Erin C Mullany; Kinnari S Murthy; Mohsen Naghavi; Ziad Nahas; Aliya Naheed; Kovin S Naidoo; Luigi Naldi; Devina Nand; Vinay Nangia; K M Venkat Narayan; Denis Nash; Bruce Neal; Chakib Nejjari; Sudan P Neupane; Charles R Newton; Frida N Ngalesoni; Jean de Dieu Ngirabega; Grant Nguyen; Nhung T Nguyen; Mark J Nieuwenhuijsen; Muhammad I Nisar; José R Nogueira; Joan M Nolla; Sandra Nolte; Ole F Norheim; Rosana E Norman; Bo Norrving; Luke Nyakarahuka; In-Hwan Oh; Takayoshi Ohkubo; Bolajoko O Olusanya; Saad B Omer; John Nelson Opio; Ricardo Orozco; Rodolfo S Pagcatipunan; Amanda W Pain; Jeyaraj D Pandian; Carlo Irwin A Panelo; Christina Papachristou; Eun-Kee Park; Charles D Parry; Angel J Paternina Caicedo; Scott B Patten; Vinod K Paul; Boris I Pavlin; Neil Pearce; Lilia S Pedraza; Andrea Pedroza; Ljiljana Pejin Stokic; Ayfer Pekericli; David M Pereira; Rogelio Perez-Padilla; Fernando Perez-Ruiz; Norberto Perico; Samuel A L Perry; Aslam Pervaiz; Konrad Pesudovs; Carrie B Peterson; Max Petzold; Michael R Phillips; Hwee Pin Phua; Dietrich Plass; Dan Poenaru; Guilherme V Polanczyk; Suzanne Polinder; Constance D Pond; C Arden Pope; Daniel Pope; Svetlana Popova; Farshad Pourmalek; John Powles; Dorairaj Prabhakaran; Noela M Prasad; Dima M Qato; Amado D Quezada; D Alex A Quistberg; Lionel Racapé; Anwar Rafay; Kazem Rahimi; Vafa Rahimi-Movaghar; Sajjad Ur Rahman; Murugesan Raju; Ivo Rakovac; Saleem M Rana; Mayuree Rao; Homie Razavi; K Srinath Reddy; Amany H Refaat; Jürgen Rehm; Giuseppe Remuzzi; Antonio L Ribeiro; Patricia M Riccio; Lee Richardson; Anne Riederer; Margaret Robinson; Anna Roca; Alina Rodriguez; David Rojas-Rueda; Isabelle Romieu; Luca Ronfani; Robin Room; Nobhojit Roy; George M Ruhago; Lesley Rushton; Nsanzimana Sabin; Ralph L Sacco; Sukanta Saha; Ramesh Sahathevan; Mohammad Ali Sahraian; Joshua A Salomon; Deborah Salvo; Uchechukwu K Sampson; Juan R Sanabria; Luz Maria Sanchez; Tania G Sánchez-Pimienta; Lidia Sanchez-Riera; Logan Sandar; Itamar S Santos; Amir Sapkota; Maheswar Satpathy; James E Saunders; Monika Sawhney; Mete I Saylan; Peter Scarborough; Jürgen C Schmidt; Ione J C Schneider; Ben Schöttker; David C Schwebel; James G Scott; Soraya Seedat; Sadaf G Sepanlou; Berrin Serdar; Edson E Servan-Mori; Gavin Shaddick; Saeid Shahraz; Teresa Shamah Levy; Siyi Shangguan; Jun She; Sara Sheikhbahaei; Kenji Shibuya; Hwashin H Shin; Yukito Shinohara; Rahman Shiri; Kawkab Shishani; Ivy Shiue; Inga D Sigfusdottir; Donald H Silberberg; Edgar P Simard; Shireen Sindi; Abhishek Singh; Gitanjali M Singh; Jasvinder A Singh; Vegard Skirbekk; Karen Sliwa; Michael Soljak; Samir Soneji; Kjetil Søreide; Sergey Soshnikov; Luciano A Sposato; Chandrashekhar T Sreeramareddy; Nicolas J C Stapelberg; Vasiliki Stathopoulou; Nadine Steckling; Dan J Stein; Murray B Stein; Natalie Stephens; Heidi Stöckl; Kurt Straif; Konstantinos Stroumpoulis; Lela Sturua; Bruno F Sunguya; Soumya Swaminathan; Mamta Swaroop; 
Bryan L Sykes; Karen M Tabb; Ken Takahashi; Roberto T Talongwa; Nikhil Tandon; David Tanne; Marcel Tanner; Mohammad Tavakkoli; Braden J Te Ao; Carolina M Teixeira; Martha M Téllez Rojo; Abdullah S Terkawi; José Luis Texcalac-Sangrador; Sarah V Thackway; Blake Thomson; Andrew L Thorne-Lyman; Amanda G Thrift; George D Thurston; Taavi Tillmann; Myriam Tobollik; Marcello Tonelli; Fotis Topouzis; Jeffrey A Towbin; Hideaki Toyoshima; Jefferson Traebert; Bach X Tran; Leonardo Trasande; Matias Trillini; Ulises Trujillo; Zacharie Tsala Dimbuene; Miltiadis Tsilimbaris; Emin Murat Tuzcu; Uche S Uchendu; Kingsley N Ukwaja; Selen B Uzun; Steven van de Vijver; Rita Van Dingenen; Coen H van Gool; Jim van Os; Yuri Y Varakin; Tommi J Vasankari; Ana Maria N Vasconcelos; Monica S Vavilala; Lennert J Veerman; Gustavo Velasquez-Melendez; N Venketasubramanian; Lakshmi Vijayakumar; Salvador Villalpando; Francesco S Violante; Vasiliy Victorovich Vlassov; Stein Emil Vollset; Gregory R Wagner; Stephen G Waller; Mitchell T Wallin; Xia Wan; Haidong Wang; JianLi Wang; Linhong Wang; Wenzhi Wang; Yanping Wang; Tati S Warouw; Charlotte H Watts; Scott Weichenthal; Elisabete Weiderpass; Robert G Weintraub; Andrea Werdecker; K Ryan Wessells; Ronny Westerman; Harvey A Whiteford; James D Wilkinson; Hywel C Williams; Thomas N Williams; Solomon M Woldeyohannes; Charles D A Wolfe; John Q Wong; Anthony D Woolf; Jonathan L Wright; Brittany Wurtz; Gelin Xu; Lijing L Yan; Gonghuan Yang; Yuichiro Yano; Pengpeng Ye; Muluken Yenesew; Gökalp K Yentür; Paul Yip; Naohiro Yonemoto; Seok-Jun Yoon; Mustafa Z Younis; Zourkaleini Younoussi; Chuanhua Yu; Maysaa E Zaki; Yong Zhao; Yingfeng Zheng; Maigeng Zhou; Jun Zhu; Shankuan Zhu; Xiaonong Zou; Joseph R Zunt; Alan D Lopez; Theo Vos; Christopher J Murray
Journal:  Lancet       Date:  2015-09-11       Impact factor: 79.321

