Siti Raihanah Abdani, Mohd Asyraf Zulkifley, Mohamad Ibrani Shahrimin, Nuraisyah Hani Zulkifley.
Abstract
Pterygium is an eye condition in which fibrovascular tissue grows towards the corneal region. At an early stage it is not harmful, apart from slight discomfort for the patient. However, it starts to affect the patient's eyesight once the tissue encroaches on the corneal region, with a more serious impact if it has grown into the pupil region. Therefore, this condition needs to be identified as early as possible so that its growth can be halted with simple eye drops and sunglasses. One of the associated risk factors for this condition is a low educational level, which explains why most patients are unaware of it. Hence, it is important to develop an automated pterygium screening system based on simple imaging modalities, such as a mobile phone camera, so that it is accessible to many people. During the early stage of automated pterygium screening system development, conventional machine learning techniques such as support vector machines and artificial neural networks were the de facto algorithms for detecting the presence of pterygium tissue. However, with the arrival of the deep learning era, coupled with the availability of large training datasets, deep learning networks have replaced conventional networks in screening for the pterygium condition. Deep learning networks have been successfully implemented for three major purposes: to classify whether an image contains pterygium tissue, to localize the lesion tissue through object detection, and to semantically segment the lesion tissue at the pixel level. This review paper summarizes the types, severity, risk factors, and existing state-of-the-art technology in automated pterygium screening systems. A few available datasets for both classification and segmentation tasks are also discussed.
In conclusion, a computer-assisted pterygium screening system will benefit many people all over the world, especially by alerting them to the possibility of having this condition so that preventive actions can be advised at an early stage.
Keywords: classification; deep learning; eye disease screening; pterygium assessment; semantic segmentation
Year: 2022 PMID: 35328192 PMCID: PMC8947201 DOI: 10.3390/diagnostics12030639
Source DB: PubMed Journal: Diagnostics (Basel) ISSN: 2075-4418
Figure 1. Medial canthus and lateral canthus of the eye.
Figure 2. General information flow of this review paper.
Figure 3. Samples of pterygium-infected tissues according to the severity level.
Summary of the risk factors of the pterygium condition.
| Study | Publication Year | Sample Size | Study Location | Risk Factors |
|---|---|---|---|---|
| West and Munoz [ | 2009 | 4774 | Arizona, USA | Low income, low educational status, and exposure to sunlight |
| Cajucom-Uy et al. [ | 2010 | 3282 | Singapore | Increasing age, male, outdoor occupation, and systemic factors |
| Zhong et al. [ | 2012 | 2133 | Dali, China | Increasing age, lack of formal education, and outdoor occupation |
| Jiao et al. [ | 2014 | 17,816 | Shandong Province, China | Older age, outdoor time, educational level, and usage of sunglasses |
| Malefikar et al. [ | 2017 | 420 | Ilam Province, Iran | Family history of pterygium, cigarette smoking, history of baking, age, and severe blepharitis |
| Wang et al. [ | 2020 | 2651 | Inner Mongolia, China | Age, outdoor occupation, and time spent in rural areas |
| Fekadu et al. [ | 2020 | 400 | Gambella, Ethiopia | Male, outdoor occupation, and exposure to sunlight |
Figure 4. Measurement distance between the limbus and the apex of abnormal tissues in the corneal region.
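The measurement in Figure 4 lends itself to a simple severity proxy: the limbus-to-apex distance divided by the corneal radius gives the fraction of the cornea that the lesion has crossed. The sketch below illustrates this idea only; the default radius value and the grade cut-offs are hypothetical assumptions, not clinical criteria or values taken from this review.

```python
def encroachment_ratio(limbus_to_apex_mm: float, corneal_radius_mm: float = 5.75) -> float:
    """Fraction of the corneal radius crossed by the pterygium apex.

    limbus_to_apex_mm: measured distance from the limbus to the lesion apex.
    corneal_radius_mm: illustrative average corneal radius (assumed value).
    """
    if corneal_radius_mm <= 0:
        raise ValueError("corneal radius must be positive")
    # Clamp to [0, 1]: a lesion cannot cross more than the full radius.
    return min(max(limbus_to_apex_mm / corneal_radius_mm, 0.0), 1.0)


def grade_severity(ratio: float) -> str:
    # Illustrative cut-offs only; not clinical grading criteria.
    if ratio < 0.25:
        return "mild"
    if ratio < 0.6:
        return "moderate"
    return "severe"
```

For example, a lesion whose apex lies 3 mm past the limbus yields a ratio of roughly 0.52 under the assumed radius, landing in the "moderate" band.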
Summary of the Zaki et al. [19] pterygium dataset.
| Sources | No. of Samples | Resolution | Format | Iris Colors |
|---|---|---|---|---|
| Australian Pterygium | 30 | 4064 × 2704 | JPEG | Blue and Brown |
| Brazil Pterygium | 30 | 308 × 231 | JPEG | Blue and Brown |
| MILES | 30 | 1747 × 1180 | JPEG | Blue |
| UBIRIS | 30 | 200 × 150 | JPEG | Brown |
Figure 5. Sample images from the Zaki et al. [19] dataset; sample no. 1 is from the Australian Pterygium dataset, sample no. 2 is from the Brazil Pterygium dataset, sample no. 3 is from the MILES dataset, and sample no. 4 is from the UBIRIS dataset.
Figure 6. Samples of pterygium images and their corresponding ground truth semantic images. The first row shows the original anterior segment photographs and the second row shows the corresponding ground truth label images.
Summary of automated pterygium screening systems using conventional approaches.
| Study | Task | Sample Size | Strength | Weakness |
|---|---|---|---|---|
| Hilmi et al. [ | Severity grading | 93 pterygium images | Three-class problem: atrophic, intermediate, and fleshy | Relies only on redness information |
| Mesquita and Figueiredo [ | Tissue growth progress | 58 pterygium images | Good segmentation even if the iris and pterygium tissues look similar in color | Circular Hough transform only works if the gaze is perpendicular to the camera |
| Gao et al. [ | Classification: pterygium and non-pterygium | 30 pterygium images and 854 non-pterygium images | Utilizes a unique Fisher channel | Too many deterministic thresholds, which will not work when tested on different iris colors |
| Minami et al. [ | Tissue growth progress | 456 pterygium images | Fourier frequency analysis to represent the growth ring of pterygium tissues | Only six quantized levels to represent the tissue growth |
| Azemin et al. [ | Severity grading | 68 pterygium images | Utilizes a compact ANN with five features as input | Relies only on color information without considering pterygium tissue textures |
| Zaki et al. [ | Classification: pterygium and normal | 60 pterygium images and 60 healthy eye images | Gradient-based lesion extraction, which is robust to various iris colors | Dataset is skewed: the healthy-eye data were captured under more standardized conditions |
| Jais et al. [ | Severity grading | 93 pterygium images | Analyzes multiple conventional machine learning classifiers | No cross-validation; the test dataset comprises only 9 images |
| Radzi et al. [ | Lesion segmentation | 120 pterygium images | Introduces a pixel-based ratio between lesions and non-lesions to determine severity level | Produces smooth lesion boundaries, which are inaccurate in most cases |
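The pixel-based severity measure attributed to Radzi et al. above can be sketched as the lesion fraction of a binary segmentation mask. The function name and exact formulation here are our illustrative assumptions; the original paper's definition may differ.

```python
def lesion_pixel_ratio(mask):
    """Fraction of lesion pixels in a binary segmentation mask.

    mask: 2-D nested sequence of 0/1 values, where 1 marks a pterygium pixel.
    Returns a value in [0, 1]; higher values suggest more extensive lesions.
    """
    lesion = sum(v for row in mask for v in row)   # count of lesion pixels
    total = sum(len(row) for row in mask)          # count of all pixels
    if total == 0:
        raise ValueError("mask must contain at least one pixel")
    return lesion / total
```

A severity grade could then be assigned by thresholding this ratio, analogous to the grading schemes summarized above.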
Summary of automated pterygium screening systems using deep learning approaches.
| Study | Task | Sample Size | Strength | Weakness |
|---|---|---|---|---|
| Lopez and Aquilera [ | Classification | 325 pterygium images and 2692 healthy eye images | Performs data augmentation to balance the training dataset | Uses only a single convolutional layer |
| Abdani et al. [ | Classification | 60 pterygium images and 60 healthy eye images | Analyzes various regularization methods and implements transfer learning | Trained on a small amount of data |
| Zheng et al. [ | Classification | 142 normal images, 144 observed pterygium images, and 150 surgery-required images | Lightweight deep model using MobileNet architectures | Training data are too limited to train MobileNet effectively |
| Fang et al. [ | Classification | Test data: 217 pterygium images and 6094 healthy eye images | Tested on both slit-lamp and hand-held camera images | Dataset is severely imbalanced, with a small number of pterygium cases |
| Xu et al. [ | Classification | 189 pterygium images, 171 observed pterygium images, and 110 surgery-required images | Implements the state-of-the-art EfficientNet architecture | Lowest detection rate for the observed pterygium class, even though tested only on a brown-iris dataset |
| Pterygium-Net [ | Localization | 60 pterygium images | Locates the regions of pterygium lesions | Bounding-box representation is not suitable for slender-shaped tissues |
| Abdani et al. [ | Segmentation | 328 pterygium images | Embeds a dense feed-forward layer into the DeepLab architecture | Dense connections for DeepLab V2 only slightly improve performance |
| Abdani et al. [ | Segmentation | 328 pterygium images | Embeds a group-and-shuffle unit with multi-scale parallel networks | Available dataset is relatively small for such a complex deep learning architecture |
| EyeHealer [ | Classification and segmentation | 482 pterygium images | Compared against various eye diseases | Small number of training images except for the cataract and pterygium cases |
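Segmentation systems like those summarized above are typically scored with pixel-overlap metrics; the Dice coefficient is one common choice, though the table does not state which metric each study used, so this choice is ours. A minimal sketch over flat 0/1 masks:

```python
def dice_coefficient(pred, truth):
    """Dice overlap between two binary masks given as flat 0/1 sequences.

    Returns 1.0 for perfect agreement and 0.0 for no overlap; two empty
    masks are treated as a perfect match by convention.
    """
    if len(pred) != len(truth):
        raise ValueError("masks must have the same number of pixels")
    intersection = sum(p & t for p, t in zip(pred, truth))
    size = sum(pred) + sum(truth)
    return 1.0 if size == 0 else 2.0 * intersection / size
```

For instance, a prediction that recovers one of two true lesion pixels with no false positives scores 2/3, reflecting how the metric penalizes both missed and spurious pixels.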