| Literature DB >> 26035836 |
Ramon Pires, Tiago Carvalho, Geoffrey Spurling, Siome Goldenstein, Jacques Wainer, Alan Luckie, Herbert F Jelinek, Anderson Rocha.
Abstract
Diabetic Retinopathy (DR) is a complication of diabetes mellitus that affects more than one-quarter of the population with diabetes, and can lead to blindness if not discovered in time. Automated screening enables the identification of patients who need further medical attention. This study aimed to classify retinal images of Aboriginal and Torres Strait Islander peoples using an automated computer-based multi-lesion eye screening program for diabetic retinopathy. The multi-lesion classifier was trained on 1,014 images from the São Paulo Eye Hospital and tested on retinal images containing no DR-related lesion, single lesions, or multiple types of lesions from the Inala Aboriginal and Torres Strait Islander health care centre. The automated multi-lesion classifier has the potential to enhance the efficiency of clinical practice delivering diabetic retinopathy screening. Our program does not require training images from the specific ethnic group or population being assessed and is independent of image pre- or post-processing to identify retinal lesions. In this Aboriginal and Torres Strait Islander population, the program achieved 100% sensitivity and 88.9% specificity in identifying bright lesions, while detection of red lesions achieved a sensitivity of 67% and specificity of 95%. When both bright and red lesions were present, 100% sensitivity with 88.9% specificity was obtained. All results obtained with this automated screening program meet WHO standards for diabetic retinopathy screening.
Year: 2015 PMID: 26035836 PMCID: PMC4452786 DOI: 10.1371/journal.pone.0127664
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Fig 1. Retinal image.
Regions of Interest (dashed line boundaries) and PoI (solid, circular boundaries).
Fig 2. Pipeline of the BoVW-based automated diabetic retinopathy classification system for identifying abnormal retinal images.
For Low-Level feature extraction, the proposed method identifies Points of Interest (points in high-contrast or context-changing areas) within regions of interest that contain specific lesions marked and reviewed by specialists during training. For Codebook Learning (vocabulary creation), the method applies k-means clustering with Euclidean distance to a sample of the points of interest; the resulting centroids serve as representative codewords (the most informative points of interest, for instance). Given the codebook and the low-level descriptions of a set of training images for a specific DR-related lesion, the Mid-Level feature extraction step employs the classical coding/pooling combination (hard/sum), which builds a histogram of the number of activations of each visual word in each analyzed image. For Decision model training, the method trains one decision model per task: normal vs. bright lesions, normal vs. red lesions, or normal vs. multi-lesion classification.
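The codebook-learning and hard/sum coding-pooling steps described in the caption can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes local descriptors are already extracted as rows of a NumPy array, and the function names (`learn_codebook`, `bovw_histogram`) are illustrative only.

```python
import numpy as np

def learn_codebook(descriptors, k, iters=10, seed=0):
    """Plain k-means with Euclidean distance over sampled descriptors.

    The k centroids play the role of the visual-word codebook.
    """
    rng = np.random.default_rng(seed)
    centroids = descriptors[rng.choice(len(descriptors), k, replace=False)].copy()
    for _ in range(iters):
        # distance of every descriptor to every centroid
        dists = np.linalg.norm(descriptors[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):  # keep old centroid if cluster went empty
                centroids[j] = members.mean(axis=0)
    return centroids

def bovw_histogram(descriptors, codebook):
    """Hard coding + sum pooling.

    Hard coding: each descriptor activates only its nearest codeword.
    Sum pooling: activations per codeword are counted into a histogram,
    giving the mid-level (BoVW) representation of one image.
    """
    dists = np.linalg.norm(descriptors[:, None] - codebook[None], axis=2)
    nearest = dists.argmin(axis=1)
    return np.bincount(nearest, minlength=len(codebook)).astype(float)
```

The resulting fixed-length histogram (one entry per codeword) is what feeds the per-lesion decision models; each histogram entry counts how many of the image's points of interest fell closest to that visual word.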
Fig 3. ROC curves.
Training and testing images both drawn from the DR1 dataset.
Fig 4. ROC curves.
Red lesion, bright lesion, and multi-lesion detection.