
Leveraging Multimodal Deep Learning Architecture with Retina Lesion Information to Detect Diabetic Retinopathy.

Vincent S Tseng1,2, Ching-Long Chen3, Chang-Min Liang3, Ming-Cheng Tai3, Jung-Tzu Liu4, Po-Yi Wu4, Ming-Shan Deng4, Ya-Wen Lee4, Teng-Yi Huang4, Yi-Hao Chen3.   

Abstract

Purpose: To improve disease severity classification from fundus images using a hybrid architecture with symptom awareness for diabetic retinopathy (DR).
Methods: We used 26,699 fundus images of 17,834 diabetic patients from three Taiwanese hospitals collected in 2007 to 2018 for DR severity classification. Thirty-seven ophthalmologists verified the images using lesion annotation and severity classification as the ground truth. Two deep learning fusion architectures were proposed: late fusion, which combines lesion and severity classification models in parallel using a postprocessing procedure, and two-stage early fusion, which combines lesion detection and classification models sequentially and mimics the decision-making process of ophthalmologists. Messidor-2 was used with 1748 images to evaluate and benchmark the performance of the architecture. The primary evaluation metrics were classification accuracy, weighted κ statistic, and area under the receiver operating characteristic curve (AUC).
Results: For hospital data, a hybrid architecture achieved a good detection rate, with accuracy and weighted κ of 84.29% and 84.01%, respectively, for five-class DR grading. It also classified images of early-stage DR more accurately than conventional algorithms. On Messidor-2, the model achieved an AUC of 97.09% in referable DR detection, compared with AUCs of 85% to 99% for state-of-the-art algorithms trained on larger databases.
Conclusions: Our hybrid architectures strengthened and extracted characteristics from DR images while improving the performance of DR grading, thereby increasing the robustness and confidence of the architectures for general use.
Translational Relevance: The proposed fusion architectures can enable faster and more accurate diagnosis of various DR pathologies than current manual clinical practice.

Keywords:  convolutional neural network; diabetic retinopathy; fundus image; fusion architecture; object detection

Year:  2020        PMID: 32855845      PMCID: PMC7424907          DOI: 10.1167/tvst.9.2.41

Source DB:  PubMed          Journal:  Transl Vis Sci Technol        ISSN: 2164-2591            Impact factor:   3.283


Introduction

Diabetic retinopathy (DR) is a sight-threatening disease; however, timely diagnosis in the early stage can reduce the occurrence of vision loss or blindness by spurring prompt medical intervention and management of glucose levels and blood pressure. Long-term diabetes is likely to cause DR, which impairs the microvascular transport of blood and nutrients to the retina, causing the retina to leak or swell and eventually leading to blindness. Taiwan's National Health Insurance system recommends an annual fundus examination for diabetic patients to detect DR. The examination rate is low, however, because most patients are unaware of their condition until they experience reduced vision. To increase adherence to the examination, a one-stop service consisting of primary care and retinal imaging has been established. However, several other issues remain to be addressed in DR detection. The first issue is expertise: a well-trained ophthalmologist is required for DR grading and lesion type assessment. The second issue is intergrader reliability: human interpretation of imaging varies among ophthalmologists. The third issue is manpower: the compound annual growth rate (CAGR) of the number of eye doctors in Taiwan (CAGR: 2.60%) is lower than that of the diabetic population (CAGR: 4.78%). Thus, there is an urgent and critical need for an artificial intelligence-based approach to support decision making. Developing a robust, automated DR grading system that gives a prompt response is therefore required to support frontline clinicians who are not experts in ophthalmology. This would reduce clinicians' workload and alleviate the personnel insufficiency associated with the large number of diabetic patients. The DR severity level is determined from the findings observed in the fundus image.
The International Clinical Diabetic Retinopathy Disease Severity (ICDR) Scale has been widely used to identify patients with signs related to the types of DR lesions, such as microaneurysms (MA), hemorrhages (H), and exudates (EX). According to the signs and lesion distributions defined by the ICDR scale, DR severity is divided into five levels: no apparent retinopathy, mild nonproliferative DR (NPDR), moderate NPDR, severe NPDR, and proliferative DR (PDR). A patient need not be referred to an ophthalmologist if the eye is graded as nonreferable DR (less than moderate NPDR); referral is needed only for referable DR (moderate/severe NPDR and PDR). The early signs of DR include MA, H, and EX. MA is the first clinical sign of DR and the only characteristic of mild NPDR. Therefore, MA recognition is critical in the clinical management of DR and in patient education. Hence, in addition to a convolutional neural network (CNN)-based grading model, we focus on lesion information as a complementary feature to improve DR severity classification. Previous algorithms incorporating lesion information have shown promising results, but their inference speed is hindered by the patch-based method. Hence, we propose two CNN-based fusion architectures that do not use lesion patches as inputs, to support DR grading efficiently. To better understand the signs of lesions and which types and distributions affect DR severity, we explored whether the proposed architectures can increase the robustness and interpretability of DR severity classification. Two architectures are proposed: a late fusion method that combines two deep learning architectures via a postprocessing procedure, and a two-stage early fusion method that exploits pixel-level lesion localization for DR classification.
Assuming that the extracted neighborhood context of lesions enhances the classification performance, the lesion detection or localization may support clinical diagnosis, especially for subtle lesion detection in the early stages of DR. As such, we aimed to identify the DR severity of Taiwanese diabetic patients using fundus images from 2007 to 2018 with added lesion information via an improved hybrid recognition method.

Methods

This section presents detailed information on the collected database and proposes two different architectures for fusing both lesion information and a grading network for DR classification. First, a late fusion architecture combines the grading model and lesion-classification model via a postprocessing procedure. Second, a two-stage early fusion architecture highlights the suspicious DR lesions and produces fully weighted lesion images in the first stage. Then, raw images and fully weighted images are trained jointly in the second stage for DR grading.

Database

This study used two data sets: a private data set from three Taiwan hospitals and a public data set, Messidor-2. For the private data set, we used 26,699 fundus images obtained from 17,834 patients between 2007 and 2018 at Tri-Service General Hospital, Chung Shan Medical University Hospital, and China Medical University Hospital. The hospitals’ institutional review boards and the Industrial Technology Research Institute approved this study, and the research followed the tenets of the Declaration of Helsinki. The need for informed consent was waived owing to the retrospective nature of the study. A variety of ophthalmoscopes were used with 45° fields of view. A group of board-certified ophthalmologists independently graded the images based on the ICDR scale and annotated the corresponding lesions. The private data set was randomly split into three independent data sets based on patient IDs: a training set (22,617 images), a validation set (2039 images), and a testing set (2043 images). The distributions of the five-class DR severity and the four DR lesion types are shown in Figure 1. A new distribution of Messidor-2, with 1748 images (78.26% nonreferable DR and 21.74% referable DR), was used for testing as well.
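The patient-based split described above can be sketched as follows. This is a minimal illustration, not the authors' code; the function name, seed, and split fractions (approximated from the reported image counts) are assumptions. The key property is that every image of a given patient lands in exactly one split.

```python
import random
from collections import defaultdict

def split_by_patient(image_ids, patient_ids, fractions=(0.85, 0.075, 0.075), seed=42):
    """Split images into train/val/test so each patient appears in exactly one set."""
    by_patient = defaultdict(list)
    for img, pid in zip(image_ids, patient_ids):
        by_patient[pid].append(img)
    patients = sorted(by_patient)
    random.Random(seed).shuffle(patients)
    n = len(patients)
    n_train = int(fractions[0] * n)
    n_val = int(fractions[1] * n)
    split_pids = {
        "train": patients[:n_train],
        "val": patients[n_train:n_train + n_val],
        "test": patients[n_train + n_val:],
    }
    # Expand each split's patient IDs back into their image IDs.
    return {name: [img for pid in pids for img in by_patient[pid]]
            for name, pids in split_pids.items()}
```

Splitting on patient IDs rather than on images prevents near-duplicate fundus photographs of the same eye from leaking between training and evaluation.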
Figure 1.

Workflow diagram showing distribution of DR severity level and the incidence rate of DR lesions in a different data set.


Ground Truth

For the private data set, the ground truth (GT) of disease severity for each image was based on the majority consensus of three ophthalmologists. Images that were ungradable or lacked a majority consensus for the five-class classification were removed to minimize grading bias; such removals amounted to 43% of the images. A total of 26,699 images were used after this dropout (Fig. 1). The GT of lesion location for each image was based on the following rules: (1) bounding boxes labeled by two ophthalmologists are compared; if the same symptom is marked and the intersection over union (IoU) is >25%, the intersection area is taken as the GT. (2) Otherwise, the marked symptoms are retained as the GT. (3) The GT obtained from the previous steps is compared with the image marked by the third ophthalmologist, and the GT is updated accordingly. Figure 2 shows the annotated-lesion combination process between two ophthalmologists. Based on these rules, the final GT distribution of DR lesion types by DR severity level could be determined. As seen in Figure 3, lesions are marked even at the majority level of no DR; this implies that one grader marked lesion(s) while the other two graders marked no lesion in the same image. It is worth noting that the number of lesions in severe NPDR and PDR is less than that in moderate NPDR. This arises because the invaded area is usually greater at the severe levels of DR, so the relative number of discrete lesions may decrease. Furthermore, the signs of neovascularization should be taken into consideration in the judgment of PDR for more complete research.
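Combination rules (1) and (2) above can be sketched as follows. This is a minimal illustration with hypothetical function names, assuming boxes of a single symptom type given as (x1, y1, x2, y2) tuples:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def merge_annotations(boxes_d1, boxes_d2, threshold=0.25):
    """Rule 1: same-symptom boxes with IoU > threshold contribute their
    intersection as GT.  Rule 2: all unmatched boxes are retained as GT."""
    gt, matched_d2 = [], set()
    for a in boxes_d1:
        hit = False
        for j, b in enumerate(boxes_d2):
            if j not in matched_d2 and iou(a, b) > threshold:
                gt.append((max(a[0], b[0]), max(a[1], b[1]),
                           min(a[2], b[2]), min(a[3], b[3])))
                matched_d2.add(j)
                hit = True
                break
        if not hit:
            gt.append(a)  # rule 2: unmatched box from grader 1 is retained
    gt.extend(b for j, b in enumerate(boxes_d2) if j not in matched_d2)
    return gt
```

Rule (3), the comparison against the third grader, would simply apply `merge_annotations` again to the intermediate GT and the third grader's boxes.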
Figure 2.

Lesion location GT production process. (a) Lesion annotated by two ophthalmologists (D1 and D2). (b) Rule-based combination results.

Figure 3.

Distribution of DR lesion types by DR severity level.

For the public data set, Messidor-2, the grades made available by Abramoff were adopted in this study.

Late Fusion

To prepare useful information in the training process, image preprocessing was conducted in which the nonretinal background was cropped from raw images. As can be seen in Figure 4, we developed a late fusion model (M1) in which the grading model (baseline model, M0) and the four lesion type-classification models were trained independently with the cropped images. With images of size 299 × 299 as the inputs, we trained a CNN model using the Inception-v4 architecture for grading and another four CNN models using the DenseNet architecture with images of size 224 × 224 as the inputs for binary lesion classification.
Figure 4.

Workflow of the baseline model (M0) and four fusion models (M1–M4).

The lesion-classification models (with or without a lesion type of MA, H, hard exudates [HE], or soft exudates [SE]) were used for feature extraction. The lesion-classification features served as supplementary information for the late fusion architecture. As the softmax-regression features were the final outputs of the heterogeneous models (the grading model and the four binary lesion-classification models), a postprocessing method combined all the features with an ordinal ridge regression model to classify disease severity (Fig. 4).
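The late fusion step can be sketched as follows, assuming the concatenated softmax outputs (5 from the grading model plus 2 from each of the four lesion models, 13 features in total) feed a ridge regression whose continuous output is rounded to an ordinal grade. The closed-form solver and function names are illustrative, not the authors' implementation:

```python
import numpy as np

def fit_ridge(features, grades, alpha=1.0):
    """Closed-form ridge regression: w = (X^T X + alpha * I)^-1 X^T y."""
    X = np.asarray(features, dtype=float)
    y = np.asarray(grades, dtype=float)
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def predict_grade(w, features, n_classes=5):
    """Round the continuous ridge output to the nearest ordinal grade 0..4."""
    score = np.asarray(features, dtype=float) @ w
    return np.clip(np.rint(score), 0, n_classes - 1).astype(int)
```

Treating the five-level ICDR grade as an ordinal target, rather than five unordered classes, lets the regression penalize a two-level error more than a one-level error.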

Two-Stage Early Fusion

Inspired by previous work, we also developed a two-stage early fusion architecture (M2/M3) in which two different types of input images are used for grading DR. As can be seen in Figure 4, we used the raw input images instead of lesion patches for training an object-detection model based on RetinaNet with images of size 1216 × 1216 in the first stage. Our object-detection model was trained to enhance the four major symptoms of suspicious DR regions in a full image (Fig. 5). In the second stage, a classification model using Inception-v4 with lesion-enhanced images and raw images of size 299 × 299 was simultaneously trained for severity classification. Both features were concatenated before the fully connected fusion layer.
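The second-stage concatenation before the fully connected fusion layer can be sketched numerically as follows. The feature width (1536, Inception-v4's pooled output) and the random weights are placeholders for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pooled feature vectors from the two Inception-v4 branches
# (raw image and lesion-enhanced image).
feat_raw = rng.standard_normal(1536)
feat_enhanced = rng.standard_normal(1536)

# Concatenate both branches before the fully connected fusion layer.
fused = np.concatenate([feat_raw, feat_enhanced])  # shape (3072,)

# Hypothetical fully connected layer mapping to 5 DR severity logits.
W = rng.standard_normal((5, fused.size)) * 0.01
b = np.zeros(5)
logits = W @ fused + b

# Softmax over the five ICDR severity levels.
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```

In training, gradients flow through both branches jointly, so the raw-image features and the lesion-enhanced features are optimized together rather than merged afterward as in late fusion.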
Figure 5.

Input images: (a) raw image and (b) enhanced image with highlighted lesion locations (MA, H, and SE).

Specifically, we replaced the raw RGB pixels with new pixels to highlight the potential DR lesions in the first stage. To enhance the suspicious DR lesions in the predicted regions, we first divided the original image RGB matrix by 4; second, multiplied it by the lesion type based on the predicted annotation; and third, multiplied it by a function f(x) based on the confidence level information (c) of the detected lesion. The confidence level of the detector may suppress the degree of enhancement. Therefore, two enhancement strategies were used: strategy 1 (S1), using the confidence level (x, x ∈ [0, 1]) as additional information, and strategy 2 (S2), without the confidence level. The new RGB pixel is calculated as shown in Equation (1):

new pixel = (raw pixel / 4) × lesion type × f(x),  (1)

where lesion type is 1 (for no DR), 2 (for H), 3 (for HE/SE), or 4 (for MA), and f(x) equals the confidence level x under S1 and 1 under S2. As Figure 6 shows, in S1, if the four main symptoms were observed with a confidence level of 0.5 and a raw pixel value of 255, the new pixel values became 127.5, 63.75, and 95.625, respectively. Alternatively, using S2 in M2, the new weighted pixel values of the main symptoms increased to 255, 127.5, and 191.25. The differences among the pixel values of different DR lesions are therefore elevated, without information suppression, using S2.
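Equation (1) and the worked example can be checked with a short sketch. The function name is hypothetical, and f(x) = x under S1 is an assumption inferred from the worked numbers rather than stated explicitly in the source:

```python
def enhance_pixel(raw, lesion_type, confidence=None):
    """New pixel = (raw / 4) * lesion_type * f(x), per Equation (1).
    lesion_type: 1 (no DR), 2 (H), 3 (HE/SE), 4 (MA).
    S1 passes the detector confidence (assumed f(x) = x, which reproduces
    the paper's worked example); S2 omits it, i.e., f(x) = 1."""
    f = 1.0 if confidence is None else confidence
    return (raw / 4.0) * lesion_type * f

# S1, confidence 0.5, raw value 255:
# MA -> 127.5, H -> 63.75, HE/SE -> 95.625
# S2, raw value 255:
# MA -> 255.0, H -> 127.5, HE/SE -> 191.25
```

Mapping each lesion type to a distinct multiplier keeps the four symptom classes separable in pixel space after the enhancement.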
Figure 6.

Strategies of the potential DR lesions extraction. Blue dots: H; yellow dots: HE or SE; red dots: MA.

Furthermore, for early DR detection in model M3, we focused on MA detection alone and modified Equation (1) to obtain Equation (2):

new pixel = (raw pixel / 4) × lesion type,  (2)

where lesion type is 1 (for no DR) or 2 (for MA), and f(x) = 1. Image artifacts such as dust or dirt may influence the performance of MA detection because their morphology is similar to that of MA in color and size. Hence, we filtered the images through an object-detection model to remove dust or dirt particles before producing the MA-enhanced images. Finally, model M4 combines the binary lesion type information and the features from the enhanced image for further performance enhancement.

Data Analysis

We analyzed the performance of both the binary lesion type-classification model and the referable/nonreferable DR model for image-level recognition by calculating accuracy, area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. In the binary lesion type classification, "true positive" denotes that at least one predicted lesion location has an IoU >15% with a GT location; "true negative" denotes that both the GT and the prediction contain no lesion; "false positive" denotes a prediction without any GT lesion; and "false negative" denotes a GT with at least one location but either no prediction or no prediction location with an IoU >15%. An IoU threshold of 15% is needed to include small lesions: the pixel size of an MA is usually within 3 × 3, whereas the GT bounding box is around 7 × 7. Figure 7 shows an example of an IoU of approximately 15%. This threshold definition is reasonable and rigorous, as several published studies consider an image to contain a target lesion if a prediction overlaps the GT by as little as one pixel. Moreover, accuracy and the weighted κ with Fleiss-Cohen κ coefficient weights were calculated to evaluate the performance of the fusion architectures in five-class disease severity classification.
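The image-level outcome rules above can be expressed as a small decision function; this is a sketch with hypothetical names, handling one lesion type at a time:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def classify_image(gt_boxes, pred_boxes, iou_threshold=0.15):
    """Image-level outcome for one lesion type:
    TN: no GT boxes and no predictions;
    FP: predictions exist but no GT boxes;
    TP: some prediction has IoU > threshold with a GT box;
    FN: GT boxes exist but no prediction exceeds the threshold."""
    if not gt_boxes and not pred_boxes:
        return "TN"
    if not gt_boxes:
        return "FP"
    for p in pred_boxes:
        for g in gt_boxes:
            if iou(p, g) > iou_threshold:
                return "TP"
    return "FN"
```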
Figure 7.

Closeup of MA. Example of IoU smaller than 0.15. The larger bounding box is produced by GT; the smaller bounding box is produced by a prediction model.

The benchmark data set, Messidor-2, was used to evaluate the hybrid model with the best five-class results on binary classification (nonreferable versus referable DR), based on accuracy, AUC, sensitivity, and specificity.
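The weighted κ with Fleiss-Cohen (quadratic) weights used for the five-class evaluation can be sketched as follows; this is a minimal illustration, not the authors' evaluation code:

```python
import numpy as np

def weighted_kappa(y_true, y_pred, n_classes=5):
    """Weighted kappa with Fleiss-Cohen (quadratic) disagreement weights:
    w[i, j] = (i - j)^2 / (n_classes - 1)^2."""
    O = np.zeros((n_classes, n_classes))  # observed agreement matrix
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    idx = np.arange(n_classes)
    w = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    # Expected matrix from the marginals, scaled to the same total as O.
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1.0 - (w * O).sum() / (w * E).sum()
```

Because the weights grow quadratically with the distance between grades, confusing mild with moderate NPDR costs far less than confusing no DR with PDR, which suits an ordinal severity scale.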

Results

For the private testing set of 2043 images, the lesion-classification model detected the DR symptoms with an AUC greater than 81% for each symptom (Table 1). The sensitivity in detecting each symptom was greater than 64%, and the specificity in correctly detecting the absence of a symptom was greater than 80%. The classification model was thus better at identifying true negatives than true positives.
Table 1.

Performance of Binary Lesion Type-Classification Model at the Image Level

Lesion Type    Accuracy (%)    AUC (%)    Sensitivity (%)    Specificity (%)
MA             77.04a          81.32      69.90              80.14
H              87.08           90.06      79.14              90.23
HE             79.59           82.26      64.57              83.95
SE             81.79           87.90      77.44              82.09

Data are means.

We also explored the effectiveness of the four fusion models for DR grading. For the two- and five-class severity classification, a comparison between the baseline model (M0) and the proposed fusion models (M1–M4) is summarized in Table 2. The performance of M0 in terms of accuracy and weighted κ was 81.60% and 80.09%, respectively. The late fusion model M1 integrated the four major DR symptoms, slightly increasing the accuracy and weighted κ. The results of the early fusion models, M2 and M3, were similar; however, M2 decreased the misclassification rate at the mild NPDR severity level and maintained the rate at moderate NPDR (data not shown). For the early detection of referrals, in M4 we combined the features from the lesion-classification (M1) and early fusion (M2) models with a regression model, obtaining an accuracy of 92.95% and an AUC of 95.51%, better than those of the other models. We also found that S2 yielded better results than S1 (data not shown).
Table 2.

Performance Comparison of the Baseline Model (M0) and the Proposed Fusion Models (M1–M4)

                Five-Class                  Two-Class
Model    Accuracy    Weighted κ    Accuracy    AUC      Sensitivity    Specificity
M0       81.60       80.09         92.12       94.19    80.98          94.92
M1       81.69       81.19         92.22       95.08    82.20          94.73
M2       84.24       83.86         92.27       95.06    90.49          92.71
M3       85.12       84.43         91.09       94.21    90.98          91.12
M4       84.29       84.01         92.95       95.51    86.83          94.49
M4 is the best-performing model, producing the highest AUC when Messidor-2 was used to benchmark performance against state-of-the-art algorithms. In Table 3, M4, with an AUC of 97.09%, has results similar to those presented in previous works. M4 also achieved a comparable sensitivity of 93.68% in detecting referable DR and a specificity of 91.52% in detecting nonreferable DR.
Table 3.

Performance Comparison on Messidor-2 in Detecting Referable DR

Benchmark              Year   Training Data Set                                               Mydriatic/Nonmydriatic Imaging   Approach                             Accuracy (%)   AUC (%)   Sensitivity (%)   Specificity (%)
Abràmoff et al.28      2016   1,250,000 images (EyeCheck project and the University of Iowa)  Unknown                          CNN                                  –              98.0      96.8              87.0
Gulshan et al.10       2016   128,175 images (EyePACS)                                        Mixed                            Inception-v3 CNN                     –              99.0      87                98.5
Gargeya and Leng29,a   2017   75,137 images (EyePACS)                                         Mixed                            CNN + gradient boosting classifier   –              94        93                87
Pires et al.30         2019   35,126 images (part of Kaggle data set)                         Mixed                            Similar to VGG-16                    –              98.2      –                 –
Voets et al.21         2019   88,702 images (part of Kaggle data set)                         Mixed                            Inception-v3 CNN                     84.21          85.30     68.70             88.50
Li et al.31,b          2019   19,233 images (Chinese hospitals)                               Unknown                          Inception-v3 CNN                     93.49          99.05     96.93             93.45
Zago et al.32          2020   28 images with 262,144 patches per image (DiaretDB1)            Mydriatic                        Patch-based CNN                      –              94.4      90.0              87.0
Proposed model M4      2020   22,617 images (Taiwanese hospitals)                             Mixed                            Fusion CNN architecture              91.99          97.09     93.68             91.52

a Any DR.

b Only 800 images were selected from Messidor-2 to create the testing set.

Discussion

A previous study achieved a weighted κ of 84% with a large training data set (1.6 million images). We provide good baseline (M0) results, a weighted κ of 80%, from training on a much smaller data set (22,000 images). To close the gap due to training sample size, the two-stage early fusion architectures enhanced DR grading performance and achieved a similar weighted κ of 84%, indicating that the lesion detection assistance was useful. The M4 hybrid model combines the lesion type classification and early fusion information, producing the best sensitivity and specificity for detecting referable DR. Note that the incidence of soft exudates is relatively low in DR images compared to the other lesion types, and some small lesion features may disappear by the last convolutional layer. Thus, the lesion detection information is required to directly provide complementary information to the classifier. As can be seen in the upper panel of Figure 8, soft exudates were highlighted by the enhancement algorithm; a small hemorrhage was also highlighted in the lower panel. Both enhanced images helped M4 classify the images correctly as referable DR, whereas M0 originally predicted them as nonreferable. Furthermore, we used a public data set, Messidor-2, obtained from France, to validate the proposed model in practical use. M4 also achieved performance on Messidor-2 comparable to that of the benchmark algorithms without using the lesion information. In summary, these results confirm that M4 performed equally well on both the private and public data sets, improving the overall performance of DR grading.
Figure 8.

(a) Raw images. (b) Enhanced images.

The proposed strategy mimics the evaluation process of ophthalmologists, in which the fundus image is inspected to identify suspicious entities (lesion types/locations) and then classified. This hybrid process combines candidate lesion features and whole-image deep learning features, which increases the overall performance of DR grading. Moreover, although background pigmentation varies across races and ethnicities and may hinder diagnosis, the DR signs are immutable. Our architectures were trained without a transfer-learning model, based solely on Asian fundus images, and obtained robust performance on both test data sets (AUC of 95.51% for the Asian data set, 97.09% for Messidor-2). Hence, the proposed architectures, combined with well-trained DR signs, can become highly applicable to different ethnicities. This result is similar to the findings of Li et al. Furthermore, both nonmydriatic and mydriatic images were used for training to demonstrate a generalized application of the proposed architectures. Instead of using a time-consuming patch-based method, early fusion efficiently decreased the inference time for lesion detection in supporting DR grading. Reducing the misclassification rate in the early stage of DR is essential for clinical management and for preventing future patient vision loss. A minor visual change between the mild and moderate severity stages assessed with a fundus photograph or optical coherence tomography, such as MA, intraretinal hemorrhages, or small hard drusen, may be overestimated or underestimated even by experienced ophthalmologists. For example, the pixels of an MA constitute less than 0.002% of the image, and image artifacts are sometimes similar to MA. Consequently, intergrader variability is well known, with a lower κ, which affects the performance of the CNN model as well.
Accordingly, we developed the fusion architectures and combined the lesion information with the CNN model for DR grading. This may compensate for the information loss during the computation of the convolutional layers of the CNN model. A limitation of our study is that the fusion architectures excluded information on neovascularization, which is an important feature of the PDR class. This feature was not trained because the data marked as neovascularization were sparse. In addition, the performance improvement of the late fusion architecture was unclear. This finding was unexpected and suggests that there may have been overlapping features between the baseline and lesion-classification models. Future work will include adjusting different weighting methods or modifying the losses from both the image-enhancement classifier and the raw-image classifier using a controlled hyperparameter. Moreover, longitudinal image data could make DR prediction more accurate and objective; this potential should be explored further. In conclusion, we have developed fusion architectures that combine lesion information with disease severity classification. The M4 hybrid model performed well on Messidor-2 when compared with state-of-the-art algorithms without lesion detection information. Thus, we believe that M4 will assist frontline health care providers in efficiently highlighting lesion information and classifying DR severity and can be considered a representative model for general use.
References

1.  A data-driven approach to referable diabetic retinopathy detection.

Authors:  Ramon Pires; Sandra Avila; Jacques Wainer; Eduardo Valle; Michael D Abramoff; Anderson Rocha
Journal:  Artif Intell Med       Date:  2019-03-27       Impact factor: 5.326

2.  Focal Loss for Dense Object Detection.

Authors:  Tsung-Yi Lin; Priya Goyal; Ross Girshick; Kaiming He; Piotr Dollar
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2018-07-23       Impact factor: 6.226

3.  Automated Identification of Diabetic Retinopathy Using Deep Learning.

Authors:  Rishab Gargeya; Theodore Leng
Journal:  Ophthalmology       Date:  2017-03-27       Impact factor: 12.079

4.  An Automated Grading System for Detection of Vision-Threatening Referable Diabetic Retinopathy on the Basis of Color Fundus Photographs.

Authors:  Zhixi Li; Stuart Keel; Chi Liu; Yifan He; Wei Meng; Jane Scheetz; Pei Ying Lee; Jonathan Shaw; Daniel Ting; Tien Yin Wong; Hugh Taylor; Robert Chang; Mingguang He
Journal:  Diabetes Care       Date:  2018-10-01       Impact factor: 19.112

5.  Grader Variability and the Importance of Reference Standards for Evaluating Machine Learning Models for Diabetic Retinopathy.

Authors:  Jonathan Krause; Varun Gulshan; Ehsan Rahimy; Peter Karth; Kasumi Widner; Greg S Corrado; Lily Peng; Dale R Webster
Journal:  Ophthalmology       Date:  2018-03-13       Impact factor: 12.079

Review 6.  Proposed international clinical diabetic retinopathy and diabetic macular edema disease severity scales.

Authors:  C P Wilkinson; Frederick L Ferris; Ronald E Klein; Paul P Lee; Carl David Agardh; Matthew Davis; Diana Dills; Anselm Kampik; R Pararajasegaram; Juan T Verdaguer
Journal:  Ophthalmology       Date:  2003-09       Impact factor: 12.079

7.  Incidence and progression of diabetic retinopathy: a systematic review.

Authors:  Charumathi Sabanayagam; Riswana Banu; Miao Li Chee; Ryan Lee; Ya Xing Wang; Gavin Tan; Jost B Jonas; Ecosse L Lamoureux; Ching-Yu Cheng; Barbara E K Klein; Paul Mitchell; Ronald Klein; C M Gemmy Cheung; Tien Y Wong
Journal:  Lancet Diabetes Endocrinol       Date:  2018-07-11       Impact factor: 32.069

8.  Automatic Detection of Diabetic Retinopathy in Retinal Fundus Photographs Based on Deep Learning Algorithm.

Authors:  Feng Li; Zheng Liu; Hua Chen; Minshan Jiang; Xuedian Zhang; Zhizheng Wu
Journal:  Transl Vis Sci Technol       Date:  2019-11-12       Impact factor: 3.283

Review 9.  A review on automatic analysis techniques for color fundus photographs.

Authors:  Renátó Besenczi; János Tóth; András Hajdu
Journal:  Comput Struct Biotechnol J       Date:  2016-10-06       Impact factor: 7.271

10.  Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices.

Authors:  Michael D Abràmoff; Philip T Lavin; Michele Birch; Nilay Shah; James C Folk
Journal:  NPJ Digit Med       Date:  2018-08-28