
Transfer Learning for Automated OCTA Detection of Diabetic Retinopathy.

David Le1, Minhaj Alam1, Cham K Yao2, Jennifer I Lim3, Yi-Ting Hsieh4, Robison V P Chan3, Devrim Toslak1,5, Xincheng Yao1,3.   

Abstract

Purpose: To test the feasibility of using deep learning for optical coherence tomography angiography (OCTA) detection of diabetic retinopathy.
Methods: A deep-learning convolutional neural network (CNN) architecture, VGG16, was employed for this study. A transfer learning process was implemented to retrain the CNN for robust OCTA classification. One dataset, consisting of images of 32 healthy eyes, 75 eyes with diabetic retinopathy (DR), and 24 eyes with diabetes but no DR (NoDR), was used for training and cross-validation. A second dataset consisting of 20 NoDR and 26 DR eyes was used for external validation. To demonstrate the feasibility of using artificial intelligence (AI) screening of DR in clinical environments, the CNN was incorporated into a graphical user interface (GUI) platform.
Results: With the last nine layers retrained, the CNN architecture achieved the best performance for automated OCTA classification. The cross-validation accuracy of the retrained classifier for differentiating among healthy, NoDR, and DR eyes was 87.27%, with 83.76% sensitivity and 90.82% specificity. The AUC metrics for binary classification of healthy, NoDR, and DR eyes were 0.97, 0.98, and 0.97, respectively. The GUI platform enabled easy validation of the method for AI screening of DR in a clinical environment.
Conclusions: With a transfer learning process for retraining, a CNN can be used for robust OCTA classification of healthy, NoDR, and DR eyes. The AI-based OCTA classification platform may provide a practical solution to reducing the burden on experienced ophthalmologists in the mass screening of DR patients.
Translational Relevance: Deep-learning-based OCTA classification can alleviate the need for manual graders and improve DR screening efficiency. Copyright 2020 The Authors.

Keywords:  artificial intelligence; deep learning; detection; diabetic retinopathy; screening

Year:  2020        PMID: 32855839      PMCID: PMC7424949          DOI: 10.1167/tvst.9.2.35

Source DB:  PubMed          Journal:  Transl Vis Sci Technol        ISSN: 2164-2591            Impact factor:   3.283


Introduction

As the leading cause of preventable blindness in working-age adults, diabetic retinopathy (DR) affects 40% to 45% of diabetic patients. In the United States alone, the number of DR patients is estimated to increase from 7.7 million in 2010 to 14.6 million by 2050. Early detection, prompt intervention, and reliable assessment of treatment outcomes are essential to preventing irreversible visual loss from DR. With early detection and adequate treatment, more than 95% of DR-related vision losses can be prevented. Retinal vascular abnormalities, such as microaneurysms, hard exudates, retinal edema, venous beading, intraretinal microvascular anomalies, and retinal hemorrhages, are common DR findings. Therefore, imaging examination of the retinal vasculature is important for DR diagnosis and treatment evaluation. Traditional fundus photography provides limited sensitivity for revealing the subtle abnormalities correlated with early DR. Fluorescein angiography (FA) can be used to improve imaging sensitivity to retinal vascular distortions in DR; however, FA requires intravenous dye injections, which may produce side effects and require additional monitoring and careful management. Optical coherence tomography angiography (OCTA) is a noninvasive method for better visualization of retinal vasculature. OCTA allows visualization of multiple retinal layers with high resolution; thus, it is more sensitive than FA in detecting subtle vascular distortions correlated with early eye conditions. The recent development of quantitative OCTA offers a unique opportunity to utilize computer-aided disease detection and artificial intelligence (AI) classification of eye conditions. Quantitative OCTA analysis has been explored for objective assessment of, for example, DR, age-related macular degeneration (AMD), vein occlusion, and sickle cell retinopathy (SCR).
Supervised machine learning has also recently been validated for multiple-task classification to differentiate among control, DR, and SCR eyes. In principle, deep learning may provide a simple solution to fostering clinical deployment of AI classification of OCTA images. Deep learning generally refers to the convolutional neural network (CNN) algorithm, which was inspired by the human brain and visual information processing. CNNs contain millions of artificial neurons (also referred to as parameters) that process image features in a feed-forward manner, extracting simple features in early layers and complex features in later layers. Training a CNN for a specific classification task requires millions of images to optimize the network parameters. However, for the relatively new imaging modality OCTA, the limited number of currently available images poses an obstacle to practical implementation of deep learning. To overcome this limitation of data size, a transfer learning approach has been demonstrated for implementing deep learning. Transfer learning is a training method that adopts the weights of a pretrained CNN and appropriately retrains certain layers of that CNN to optimize the weights for a specific task (i.e., AI classification of retinal images). In fundus photography, transfer learning has been explored for artery–vein segmentation, glaucoma detection, and diabetic macular thinning assessment. Recently, transfer learning has also been explored in OCT for detecting choroidal neovascularization, diabetic macular edema, and AMD. In principle, transfer learning can involve a single layer or multiple layers, because each layer has weights that can be retrained. For example, the specific number of layers requiring retraining in a 16-layer CNN (Fig. 1) may vary, depending on the available dataset and the specific task of interest.
Moreover, compared to traditional fundus photography and OCT, deep learning for OCTA classification remains unexplored due to the limited size of publicly available datasets. In this study, we demonstrate the first use of OCTA for automated classification using deep learning. By leveraging transfer learning, we aim to train on a small dataset while achieving reliable DR classification. Furthermore, an easy-to-use graphical user interface (GUI) platform was also developed to foster the adoption of deep-learning-based DR classification in a clinical setting.
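The transfer learning scheme described above amounts to freezing the early layers of a pretrained CNN and retraining only the last few on the new data. A minimal bookkeeping sketch for the 16 trainable VGG16 layers follows; the layer names are illustrative placeholders, not identifiers from the study's code.

```python
# The 16 trainable VGG16 layers, ordered from the first convolution to the
# final fully connected (softmax) layer. Names are illustrative only.
VGG16_TRAINABLE_LAYERS = [
    "conv1_1", "conv1_2",
    "conv2_1", "conv2_2",
    "conv3_1", "conv3_2", "conv3_3",
    "conv4_1", "conv4_2", "conv4_3",
    "conv5_1", "conv5_2", "conv5_3",
    "fc1", "fc2", "softmax",
]

def retrain_flags(n_retrained, layers=VGG16_TRAINABLE_LAYERS):
    """Freeze all but the last n_retrained layers: frozen layers keep their
    ImageNet weights fixed, while the final n_retrained layers are retrained
    on the OCTA data."""
    n_frozen = len(layers) - n_retrained
    return {name: (i >= n_frozen) for i, name in enumerate(layers)}

flags = retrain_flags(9)  # the configuration selected in this study
# With nine retrained layers, conv4_1 onward is marked trainable and
# conv1_1 through conv3_3 stay frozen.
```

In a framework such as Keras, the same idea is expressed by setting each layer's `trainable` attribute according to these flags before compiling the model.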
Figure 1.

The deep learning CNN used for OCTA DR detection is VGG16, a network that contains 16 trainable layers: convolution (Conv) and fully connected (FC) layers. The corresponding output dimensions of each layer are shown below each block. All convolution and fully connected layers are followed by a ReLU activation function. The softmax layer is a fully connected layer followed by a softmax activation function. Maxpool and Flatten layers are operational layers with no tunable parameters.


Methods

This study adhered to the tenets of the Declaration of Helsinki and was approved by the institutional review board of the University of Illinois at Chicago (UIC).

Data Acquisition

The 6 × 6-mm² field-of-view OCTA data were acquired using an AngioVue spectral-domain OCTA system (Optovue, Fremont, CA, USA) with a 70-kHz A-scan rate, a lateral resolution of ∼15 µm, and an axial resolution of ∼5 µm. The inclusion criterion was an OCTA acquisition quality of 6 or greater. All OCTA images were qualitatively examined for severe motion or shadow artifacts, and images with significant artifacts were excluded from this study. The OCTA data were exported using ReVue (Optovue) software, and custom-developed Python procedures were used for image processing. The study involved two separate datasets collected with the same recording parameters. The first dataset, consisting of 131 images, was collected at UIC. For external validation, a second dataset, consisting of 46 images, was provided by National Taiwan University.

Classification Model Implementation

The CNN architecture chosen for this study was VGG16. The network specifications and design are illustrated in Figure 1. The pretrained weights were obtained from the ImageNet dataset. The CNN classifier was trained and evaluated using Python 3.7.1 with the Keras 2.2.4 application programming interface and the TensorFlow 1.13.1 open-source platform backend. Training was performed on a Windows 10 computer (Microsoft, Redmond, WA, USA) with an NVIDIA GeForce RTX 2080 Ti graphics processing unit (NVIDIA, Santa Clara, CA, USA). To prevent overfitting, each classifier was trained with early stopping; experimentally, the model (i.e., the retrained classifier) converged within ∼70 epochs. During each iteration, data augmentation in the form of random rotation, horizontal and vertical flips, and zoom was performed. Manual classifications were used as the reference for determining the receiver operating characteristic (ROC) curves and area under the curve (AUC). AUC was used as an index of model performance, along with sensitivity (SE), specificity (SP), and diagnostic accuracy (ACC). To evaluate the performance of each model, fivefold cross-validation was implemented.
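The fivefold cross-validation used for evaluation can be sketched as a simple index-partitioning routine: the samples are split into five folds, and each fold serves once as the held-out validation set. This is a generic sketch of the splitting step only, not the study's actual Keras/TensorFlow pipeline (which additionally applied early stopping and on-the-fly augmentation).

```python
import random

def kfold_indices(n_samples, k=5, seed=0):
    """Partition sample indices into k folds; yield (train, val) index
    lists so each fold is held out exactly once."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)       # shuffle once, reproducibly
    folds = [idx[i::k] for i in range(k)]  # k near-equal folds
    for i in range(k):
        val = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, val

# 131 OCTA images, matching the size of the cross-validation dataset
splits = list(kfold_indices(131, k=5))
```

Each of the five (train, val) pairs covers all 131 indices, and the five validation sets are disjoint, so every image is validated exactly once.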

Transfer Learning and Model Selection

In practice, it takes hundreds of thousands of data samples (i.e., images) to optimize the millions of parameters of a CNN. Training a CNN on a smaller dataset often leads to overfitting, such that the CNN has memorized the dataset, resulting in high performance when predicting on the same dataset but failure to perform well on new data. Transfer learning, which leverages the weights of a pretrained network, has been established to overcome the overfitting problem. Transfer learning is well suited for CNNs because CNNs extract features in a bottom-up hierarchical structure, a process analogous to the human visual pathway. Due to the great dissimilarity between the ImageNet dataset used to pretrain the CNN and our own OCTA dataset, we conducted a transfer learning process to determine the number of retrained layers required to achieve robust OCTA classification. For this study, misclassification error was used for quantitative assessment of CNN performance, and the one standard deviation rule was used for model selection. This rule states that the selected model must be within one standard deviation of the misclassification error of the best-performing model. The model requiring the fewest retrained layers while performing comparably to the best-trained model was selected.
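The one standard deviation rule can be made concrete as a short selection routine. In this sketch, models are ordered from least to most complex (index 0 corresponds to one retrained layer, index 15 to sixteen); the error and standard deviation values are illustrative placeholders, not the study's measurements.

```python
def one_sd_rule(errors, sds):
    """Select the least complex model whose cross-validated misclassification
    error is within one standard deviation of the best model's error.
    Models are ordered from least to most complex."""
    best = min(range(len(errors)), key=lambda i: errors[i])
    threshold = errors[best] + sds[best]
    for i, err in enumerate(errors):   # scan from least complex upward
        if err <= threshold:
            return i
    return best

# Illustrative errors for models retraining 1..16 layers (not the paper's data)
errors = [0.40, 0.35, 0.30, 0.25, 0.22, 0.20, 0.18, 0.16,
          0.14, 0.13, 0.13, 0.12, 0.12, 0.12, 0.12, 0.12]
sds = [0.03] * 16
selected = one_sd_rule(errors, sds)  # index 8, i.e., nine retrained layers
```

With these illustrative numbers, the best error (0.12) plus one standard deviation gives a threshold of 0.15, and the least complex model under that threshold is the nine-layer one, mirroring the selection outcome reported below.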

Results

Patient Demographics

Control subjects and diabetes mellitus (DM) patients with and without DR were recruited from the UIC retina clinic. The patients presented in this study are representative of a university population of DM patients who require clinical diagnosis and management of DR. Two board-certified retina specialists classified the patients based on the severity of DR according to the Early Treatment Diabetic Retinopathy Study staging system. All patients underwent complete anterior and dilated posterior segment examination (JIL, RVPC). All control OCTA images were obtained from healthy volunteers who provided informed consent for OCT and OCTA imaging. All subjects underwent OCT and OCTA imaging of both eyes (OD and OS). The images used in this study did not include eyes with other ocular diseases or other retinal pathological features, such as epiretinal membranes and macular edema. Additional exclusion criteria included eyes with a prior history of intravitreal injections, vitreoretinal surgery, or significant (greater than a typical blot hemorrhage) macular hemorrhages. Subject and patient characteristics, including sex, age, duration of diabetes, diabetes type, hemoglobin A1c (HbA1c) status, and hypertension prevalence, are summarized in Table 1.
Table 1.

Demographics of OCTA Dataset

Demographic | Control | No DR | Mild DR | Moderate DR | Severe DR
Subjects (n) | 20 | 17 | 20 | 20 | 20
Sex, male/female (n) | 12/8 | 6/11 | 11 | 12 | 11
Age (y), mean ± SD | 42 ± 9.8 | 66.4 ± 10.14 | 50.10 ± 12.61 | 50.80 ± 8.39 | 57.84 ± 10.37
Age range (y) | 25–71 | 49–86 | 24–74 | 32–68 | 41–73
Duration of diabetes (y), mean ± SD | | | 19.64 ± 13.27 | 16.13 ± 10.58 | 23.40 ± 11.95
Diabetes type II (%) | | 100 | 100 | 100 | 100
Insulin dependent, Y/N (n) | | 14/3 | 7/13 | 12/8 | 15/5
HbA1c (%), mean ± SD | | 5.9 ± 0.7 | 6.5 ± 0.6 | 7.3 ± 0.9 | 7.8 ± 1.3
Hypertension (%) | 10 | 17 | 45 | 80 | 80
This study included 24 eyes from 17 patients with diabetes but no DR (NoDR), 75 eyes from 60 patients with DR, and 32 healthy eyes from 20 control subjects. For all subjects and patients, OCTA features were quantified and are summarized in Table 2. In this study, this dataset is referred to as the cross-validation dataset.
Table 2.

Quantification of Individual OCTA Features

Feature (mean ± SD) | Control | No DR | Mild DR | Moderate DR | Severe DR
SCP
 BVT | 1.11 ± 0.07 | 1.09 ± 0.01 | 1.14 ± 0.05 | 1.17 ± 0.06 | 1.23 ± 0.04
 BVC (µm) | 40.16 ± 0.51 | 41.28 ± 0.63 | 41.17 ± 1.35 | 40.95 ± 0.58 | 41.29 ± 1.09
 VPI | 29.77 ± 1.52 | 31.04 ± 1.58 | 28.30 ± 2.09 | 28.91 ± 2.02 | 27.33 ± 3.30
BVD (%)
 SCP
  C1, 2 mm | 56.93 ± 4.07 | 42.33 ± 7.48 | 50.44 ± 8.74 | 52.34 ± 5.71 | 44.40 ± 7.67
  C2, 4 mm | 56.49 ± 2.69 | 55.27 ± 4.00 | 54.09 ± 4.90 | 54.93 ± 4.06 | 52.47 ± 4.84
  C3, 6 mm | 54.45 ± 2.45 | 55.36 ± 3.14 | 52.77 ± 3.55 | 53.71 ± 3.94 | 52.54 ± 5.04
 DCP
  C1, 2 mm | 75.52 ± 3.70 | 63.03 ± 6.95 | 64.97 ± 8.60 | 67.22 ± 5.52 | 57.36 ± 8.46
  C2, 4 mm | 78.37 ± 3.87 | 71.52 ± 5.59 | 70.25 ± 6.45 | 70.17 ± 5.12 | 62.50 ± 7.62
  C3, 6 mm | 76.70 ± 4.93 | 71.45 ± 6.02 | 68.08 ± 6.62 | 67.11 ± 5.23 | 60.77 ± 7.72
FAZ-A
 SCP (mm²) | 0.30 ± 0.06 | 0.37 ± 0.16 | 0.33 ± 0.05 | 0.38 ± 0.07 | 0.46 ± 0.06
 DCP (mm²) | 0.39 ± 0.08 | 0.40 ± 0.14 | 0.46 ± 0.07 | 0.53 ± 0.12 | 0.58 ± 0.09
FAZ-CI
 SCP | 1.14 ± 0.11 | 1.14 ± 0.04 | 1.29 ± 0.14 | 1.38 ± 0.14 | 1.46 ± 0.18
 DCP | 1.18 ± 0.12 | 1.09 ± 0.02 | 1.31 ± 0.21 | 1.42 ± 0.19 | 1.49 ± 0.17

SCP, superficial capillary plexus; BVT, blood vessel tortuosity; BVC, blood vessel caliber; VPI, vessel perimeter index; BVD, blood vessel density; C1, C2, and C3, three circular zones; DCP, deep capillary plexus; FAZ-A, foveal avascular zone area; FAZ-CI, foveal avascular zone contour irregularity.


Model Selection

A model selection process was used to identify the number of retrained layers yielding the best OCTA classification performance. The layers of the VGG16 CNN were sequentially retrained, starting from the last layer and proceeding toward the first. A quantitative comparison of the misclassification errors of the individual models (i.e., with variable numbers of retrained layers) is provided in Figure 2. Each model was evaluated using the cross-validation dataset. We applied the one standard deviation rule, which recommends choosing the least complex model that is within one standard deviation of the misclassification error of the best-performing model. The model with nine retrained layers conformed to the one standard deviation rule and was further examined in a cross-validation study.
Figure 2.

A transfer learning performance study was conducted to determine how many layers are necessary for effective transfer learning on OCTA images. Our model consisted of 16 retrainable layers. The inset graph in the right-hand corner shows that retraining nine layers satisfies the one standard deviation rule.


Cross-Validation Study

Using a fivefold cross-validation method, we evaluated the performance of the selected model (nine retrained layers) on our cross-validation dataset. The cross-validation performance summarized in Tables 3 and 4 reveals that this model achieved an average of 87.27% ACC, 83.76% SE, and 90.82% SP across the three categories of control, NoDR, and DR. Among the individual class predictions, NoDR had the highest accuracy, followed by control and then DR. This performance is reflected in the ROC graphs and AUC values in Figure 3. For our cross-validation study, the overall AUC was 0.96, with individual AUC values for control, NoDR, and DR of 0.97, 0.98, and 0.97, respectively.
Table 3.

Cross-Validation Multi-Label Confusion Matrix (n = 131)

True Label | Predicted Control | Predicted No DR | Predicted DR
Control | 25 | 3 | 4
NoDR | 1 | 23 | 0
DR | 9 | 8 | 58
Table 4.

Cross-Validation Evaluation Metrics (n = 131)

Metric (mean ± SD) | Control | No DR | DR | Average
ACC (%) | 87.022 ± 0.059 | 90.840 ± 0.020 | 83.970 ± 0.050 | 87.277 ± 0.034
SE (%) | 78.123 ± 0.152 | 95.835 ± 0.089 | 77.334 ± 0.076 | 83.764 ± 0.105
SP (%) | 89.899 ± 0.050 | 89.720 ± 0.022 | 92.858 ± 0.041 | 90.825 ± 0.018
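The per-class metrics in Table 4 follow directly from the Table 3 confusion matrix under the usual one-vs-rest definitions. A minimal sketch (Table 4 reports fold-averaged values, so small rounding differences against this direct computation are expected):

```python
def per_class_metrics(cm, classes):
    """One-vs-rest accuracy (ACC), sensitivity (SE), and specificity (SP)
    for each class of a confusion matrix (rows: true labels, columns:
    predicted labels)."""
    n = sum(sum(row) for row in cm)
    out = {}
    for k, name in enumerate(classes):
        tp = cm[k][k]                                    # true positives
        fn = sum(cm[k]) - tp                             # missed positives
        fp = sum(cm[i][k] for i in range(len(cm))) - tp  # false alarms
        tn = n - tp - fn - fp                            # everything else
        out[name] = {"ACC": (tp + tn) / n,
                     "SE": tp / (tp + fn),
                     "SP": tn / (tn + fp)}
    return out

# Confusion matrix from Table 3 (rows: true Control, NoDR, DR)
cm = [[25, 3, 4],
      [1, 23, 0],
      [9, 8, 58]]
metrics = per_class_metrics(cm, ["Control", "NoDR", "DR"])
# e.g., Control: ACC = 114/131 ≈ 87.0%, SE = 25/32 ≈ 78.1%, SP = 89/99 ≈ 89.9%
```

These values reproduce the Control column of Table 4 (87.022%, 78.123%, 89.899%) to within fold-averaging precision.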
Figure 3.

ROC curves for the cross-validation performance of the model for individual class performance (control, NoDR, and DR) and the average performance of the model.


External Validation

In addition to cross-validation, we validated our CNN classifier using an external dataset from a cohort of NoDR and DR patients at National Taiwan University Hospital. The dataset comprised 13 NoDR patients (eight males), with an average age of 62 ± 8.35 years, and 21 DR patients (12 males), with an average age of 61 ± 12.80 years. Other clinical information regarding the dataset was not available to us. The validation results revealed an average of 70.83 ± 0.021% ACC, 67.23 ± 0.026% SE, and 73.96 ± 0.026% SP. The confusion matrix for the validation dataset is summarized in Table 5.
Table 5.

External Validation Multi-Label Confusion Matrix (n = 46)

True Label | Predicted Control | Predicted No DR | Predicted DR
Control | | |
NoDR | 1 | 13 | 6
DR | 1 | 7 | 18

Evaluation of Clinical Deployment

The model with the best performance was integrated into a custom-designed GUI platform to evaluate the potential of AI classification of DR using OCTA in a clinical environment. The GUI was developed using Java 11.01, an open-source programming language. As shown in Figure 4, the GUI adopts an interface commonly used in retina clinics to enable easy adoption by ophthalmic personnel. By clicking the "Load image" button, one OCTA image can be selected (Fig. 4A) and displayed (Fig. 4B) for visual examination. By clicking the "Predict" button, automated AI classification is performed, and the output of the DR classification is displayed in the "Results" box. Additional information about classification confidence is also available for clinical reference. The GUI platform was tested by three ophthalmologists (JIL, RVPC, and DT) to verify the feasibility of deep-learning-based AI classification of DR using OCTA in a clinical environment. For each OCTA image, the GUI-based classification can be completed within 1 minute.
Figure 4.

GUI platform for DR classification using OCTA.

GUI platform for DR classification using OCTA.

Discussion

In this study, a deep learning model was trained using transfer learning for automated classification of OCTA images into three categories: healthy control, NoDR, and DR. We utilized a selection process to determine the optimal number of layers for fine-tuning, which revealed that fine-tuning nine layers is optimal. To evaluate this model, we performed fivefold cross-validation. The cross-validation results were promising; however, the cross-validation dataset may contain some inherent biases, such as the inclusion of both eyes from one subject. To verify our classifier, we therefore tested an external dataset to confirm the performance of automated DR classification. Current research on deep learning in ophthalmology has focused on fundus photography and OCT. Transfer learning has been demonstrated as a means to train CNNs with fundus images to predict future DR progression. Recently, Li et al. employed transfer learning to train a CNN to distinguish referable from non-referable DR in fundus images. Similarly, Lu et al. trained a CNN for classification of OCT scans to detect retinal abnormalities, such as serous macular detachment, cystoid macular edema, macular hole, and epiretinal membrane. All of these studies reported good classification performance (>90% accuracy) when utilizing transfer learning to train on relatively large datasets (>500 images per cohort). One study by Eltanboly et al. proposed a deep-learning computer-aided diagnosis system for the detection of DR in OCT scans. That study utilized a dataset of 52 clinical OCT scans and achieved 92% accuracy in fourfold cross-validation and 100% accuracy on one hold-out validation. Although these results are promising, that study did not use an external database for validation. In comparison, our cross-validation performance revealed 87.27% accuracy and an external validation accuracy of 70.83%.
The compromised external performance indicates that cross-validation alone is not sufficient for validating deep learning models and that external validation is highly important for future studies. Recent work on the classification of DR in OCTA has primarily used supervised machine learning methods. The disadvantage of supervised machine learning approaches is that the user must extract quantitative features, such as blood vessel density, blood vessel tortuosity, or foveal avascular zone area. Deep learning, on the other hand, requires minimal user input: a user, such as a clinician or technician, inputs an image, and the CNN extracts features from the input image to output a prediction. With the advantage of being easy to use, deep learning can work well for mass-screening programs. To the best of our knowledge, this study is the first exploration of deep learning for OCTA classification of DR. This study has some limitations. The dataset used to train the CNN models was acquired from one device and may contain biases, as is evident in the external validation, which revealed a lower accuracy compared to the cross-validation accuracy. This decreased accuracy may be due to differences between ethnicities; for example, the cross-validation dataset largely consisted of cohorts from African American and Hispanic populations, whereas the external dataset consisted primarily of cohorts from an East Asian population. The next step to address this issue would be to create a dataset acquired with multiple devices from different populations, but accomplishing this requires multiple-institution collaboration. Another limitation of this study (and of deep learning in general) is the lack of interpretability: the CNN can make a prediction, but the user, such as the physician, will not know how the CNN inferred its prediction.
In future studies involving AI technologies, researchers could use tools such as occlusion maps to help clinicians understand how a CNN made its prediction, in addition to utilizing external validation, in order to build trust and confidence. Based on the results of this study, we incorporated our trained CNN into a custom GUI to evaluate the potential of automated AI classification in a clinical setting. The design of the GUI was inspired by software commonly used in the UIC retina clinic and was developed using open-source software. By demonstrating the potential for quick deployment of AI technologies, together with the promising results of our study and other published studies, we hope to build confidence and foster the use of AI technologies in clinical settings. Deep learning can help clinicians gain valuable time when screening patients and can reduce the need for manual graders. In the future, AI can potentially be used to assist clinicians in understanding pathological mechanisms and developing new treatment options.
References (36 in total)

1.  Artery-vein segmentation in fundus images using a fully convolutional network.

Authors:  Ruben Hemelings; Bart Elen; Ingeborg Stalmans; Karel Van Keer; Patrick De Boever; Matthew B Blaschko
Journal:  Comput Med Imaging Graph       Date:  2019-06-15       Impact factor: 4.790

2.  Quantitative characteristics of sickle cell retinopathy in optical coherence tomography angiography.

Authors:  Minhaj Alam; Damber Thapa; Jennifer I Lim; Dingcai Cao; Xincheng Yao
Journal:  Biomed Opt Express       Date:  2017-02-23       Impact factor: 3.732

3.  A fluorescein angiographic study of macular dysfunction secondary to retinal vascular disease. VI. X-ray irradiation, carotid artery occlusion, collagen vascular disease, and vitritis.

Authors:  J D Gass
Journal:  Arch Ophthalmol       Date:  1968-11

4.  Characterisation of human non-proliferative diabetic retinopathy using the fractal analysis.

Authors:  Ştefan Ţălu; Dan Mihai Călugăru; Carmen Alina Lupaşcu
Journal:  Int J Ophthalmol       Date:  2015-08-18       Impact factor: 1.779

5.  Quantitative Optical Coherence Tomography Angiography Features and Visual Function in Eyes With Branch Retinal Vein Occlusion.

Authors:  Wasim A Samara; Abtin Shahlaee; Jayanth Sridhar; M Ali Khan; Allen C Ho; Jason Hsu
Journal:  Am J Ophthalmol       Date:  2016-03-31       Impact factor: 5.258

6.  Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning.

Authors:  Daniel S Kermany; Michael Goldbaum; Wenjia Cai; Carolina C S Valentim; Huiying Liang; Sally L Baxter; Alex McKeown; Ge Yang; Xiaokang Wu; Fangbing Yan; Justin Dong; Made K Prasadha; Jacqueline Pei; Magdalene Y L Ting; Jie Zhu; Christina Li; Sierra Hewett; Jason Dong; Ian Ziyar; Alexander Shi; Runze Zhang; Lianghong Zheng; Rui Hou; William Shi; Xin Fu; Yaou Duan; Viet A N Huu; Cindy Wen; Edward D Zhang; Charlotte L Zhang; Oulan Li; Xiaobo Wang; Michael A Singer; Xiaodong Sun; Jie Xu; Ali Tafreshi; M Anthony Lewis; Huimin Xia; Kang Zhang
Journal:  Cell       Date:  2018-02-22       Impact factor: 41.582

7.  Biomarkers of Peripheral Nonperfusion in Retinal Venous Occlusions Using Optical Coherence Tomography Angiography.

Authors:  Diogo Cabral; Florence Coscas; Agnes Glacet-Bernard; Telmo Pereira; Carlos Geraldes; Francisco Cachado; Ana Papoila; Gabriel Coscas; Eric Souied
Journal:  Transl Vis Sci Technol       Date:  2019-05-02       Impact factor: 3.283

8.  Automatic Detection of Diabetic Retinopathy in Retinal Fundus Photographs Based on Deep Learning Algorithm.

Authors:  Feng Li; Zheng Liu; Hua Chen; Minshan Jiang; Xuedian Zhang; Zhizheng Wu
Journal:  Transl Vis Sci Technol       Date:  2019-11-12       Impact factor: 3.283

9.  Generating retinal flow maps from structural optical coherence tomography with artificial intelligence.

Authors:  Cecilia S Lee; Ariel J Tyring; Yue Wu; Sa Xiao; Ariel S Rokem; Nicolaas P DeRuyter; Qinqin Zhang; Adnan Tufail; Ruikang K Wang; Aaron Y Lee
Journal:  Sci Rep       Date:  2019-04-05       Impact factor: 4.379

10. (Review) Modern technologies for retinal scanning and imaging: an introduction for the biomedical engineer.

Authors:  Boris I Gramatikov
Journal:  Biomed Eng Online       Date:  2014-04-29       Impact factor: 2.819

