Wan Mimi Diyana Wan Zaki, Haliza Abdul Mutalib, Laily Azyan Ramlan, Aini Hussain, Aouache Mustapha.
Abstract
Advances in computing and AI technology have promoted the development of connected health systems, indirectly influencing approaches to cataract treatment. Moreover, thanks to methods for cataract detection and grading based on different imaging modalities, ophthalmologists can make diagnoses with greater objectivity. This paper reviews the development and limitations of published methods for cataract detection and grading across imaging modalities. Over the years, the proposed methods have shown significant improvement and reasonable progress towards automated cataract detection and grading systems that utilise various imaging modalities, such as optical coherence tomography (OCT), fundus, and slit-lamp images. However, more robust and fully automated cataract detection and grading systems are still needed. In addition, fundus, slit-lamp, and OCT imaging require medical equipment that is expensive and not portable. Therefore, digital images from a smartphone could serve as a practical future cataract screening tool, especially for ophthalmologists in rural areas with limited healthcare facilities.
Keywords: artificial intelligence (AI); cataract; image processing; imaging modalities
Year: 2022 PMID: 35200743 PMCID: PMC8879609 DOI: 10.3390/jimaging8020041
Source DB: PubMed Journal: J Imaging ISSN: 2313-433X
Figure 1. (a) Nuclear cataract, (b) cortical cataract, (c) posterior capsular cataract [9].
Figure 2. LOCS III grading standard [23].
Summary of Previous Methods Using Machine Learning and Image Processing.
| Authors | Methods | Image Modality | Achievement | Limitation | Database |
|---|---|---|---|---|---|
| Yang et al. | Automatic cataract classification using an improved top-bottom hat transformation for pre-processing and a 2-layer backpropagation (BP) neural network as the classifier | Fundus | Achieved true positive rates of 82.1% (training) and 82.9% (test) | Pre-processing takes a long time per image | Beijing |
| Behera et al. | Nuclear cataract detection based on image processing and machine learning | Fundus | Achieved an overall accuracy of 95.2% | Focused only on nuclear cataract | Kaggle and GitHub repositories (800 fundus images) |
| Song et al. | Improved semi-supervised learning method that extracts additional information from unlabelled cataract fundus images to improve a base model trained only on labelled images | Fundus | Achieved 88.6% accuracy with an SVM model | Semiautomated method | 7851 fundus images |
| H. Li et al. | Detected the anatomical structure of lens images with a modified active shape model (ASM), extracted local features according to the clinical grading protocol, and used support vector machine regression for grade prediction | Slit-lamp | Achieved a 95% success rate for structure detection and an average grading difference of 0.36 on a 5.0 scale | User intervention was needed for images with inaccurate focus, a small pupil, or a drooping eyelid | Singapore Malay Eye Study (SiMES) (5850 slit-lamp images) |
| Huang et al. | Novel computer-aided diagnosis method based on ranking to facilitate nuclear cataract grading, following the conventional clinical decision-making process | Slit-lamp | Achieved 95% grading accuracy, versus "grading via classification" (76.8%) and "grading via regression" (87.3%) | Focused only on nuclear cataract | Singapore Malay Eye Study (SiMES) (1000 slit-lamp images) |
| A. B. Jagadale & Jadhav | Simpler automatic system for nuclear cataract classification built on a pupil-detection-region algorithm using region properties | Slit-lamp | Extracted the best features from pupil detection using the circular Hough transform (CHT) | Needs human intervention | Cottage Hospital, Pandharpur and Lions Eye Hospital, Miraj |
| A. B. Jagadale et al. | Early detection method for nuclear cataract | Slit-lamp | Achieved 90.25% accuracy in detecting nuclear cataract | Needs human intervention | Government Hospital, Pandharpur (2650 slit-lamp images) |
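Several of the pre-processing steps above rest on classical grayscale morphology; Yang et al., for instance, pre-process fundus images with an improved top-bottom hat transformation. A minimal pure-Python sketch of the standard (unimproved) transform, assuming a 3×3 square structuring element and an image represented as a list of rows (real pipelines apply this to full fundus images, typically via an optimised library):

```python
def _morph(img, op, k=1):
    """Sliding min/max filter (erosion/dilation) with a (2k+1)x(2k+1) square window."""
    h, w = len(img), len(img[0])
    return [[op(img[yy][xx]
                for yy in range(max(0, y - k), min(h, y + k + 1))
                for xx in range(max(0, x - k), min(w, x + k + 1)))
             for x in range(w)] for y in range(h)]

def erode(img):   return _morph(img, min)
def dilate(img):  return _morph(img, max)
def opening(img): return dilate(erode(img))   # suppresses small bright detail
def closing(img): return erode(dilate(img))   # suppresses small dark detail

def top_bottom_hat_enhance(img):
    """Contrast enhancement: img + top-hat - bottom-hat
    = img + (img - opening) - (closing - img)."""
    op_img, cl_img = opening(img), closing(img)
    return [[img[y][x] + (img[y][x] - op_img[y][x]) - (cl_img[y][x] - img[y][x])
             for x in range(len(img[0]))] for y in range(len(img))]
```

Small bright structures (thinner than the structuring element) are amplified while flat background is left unchanged, which is why the transform helps expose vessel detail that cataract opacity blurs.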
Summary of Cataract Detection Using Deep Learning Approaches.
| Authors | Methods | Image Modalities | Achievement | Limitation | Database |
|---|---|---|---|---|---|
| Zhang et al. | Visualised feature maps at the pool5 layer with their high-order empirical semantic meaning to explain the feature representation extracted by a deep convolutional neural network (DCNN) | Fundus | Achieved accuracies of 93.52% (detection) and 86.69% (grading) | Accuracy could be increased with more data, so a large dataset is needed | Beijing Tongren Eye Center |
| Zhou, Li, and Li | Deep neural network with discrete state transition (DST) | Fundus | Achieved 78.57% for cataract grading (with prior knowledge) | Lower accuracy than the previous DST-ResNet for cataract grading (without prior knowledge) | Beijing Tongren Hospital (1355 fundus images) |
| Mahmud Khan et al. | Cataract detection using a CNN with the VGG-19 model | Fundus | Achieved a high accuracy of 97.47% | Used unfiltered fundus images without quality assessment | Shanggong Medical Technology Co., Ltd. (800 fundus images) |
| Xiong et al. | Graded cataracts with a pre-trained residual network (ResNet) adapted from the residual learning framework | Fundus | Achieved 91.5% accuracy for 6-class classification | Good results for classes 0 and 5, but does not effectively distinguish class 2 from adjacent classes | 1352 fundus images |
| Li et al. | Restructured AlexNet and GoogLeNet into AlexNet-CAM and GoogLeNet-CAM, respectively, and used Grad-CAM, an improvement on class activation mapping (CAM) | Fundus | Achieved accuracies of 93.28% (AlexNet-CAM) and 94.93% (GoogLeNet-CAM) | Automated method | Beijing Tongren Eye |
| Imran et al. | Hybrid model integrating a deep learning model and an SVM for 4-class cataract classification | Fundus | Achieved 95.65% accuracy | Limited fundus images for the moderate and severe cataract categories | Tongren Hospital, China (8030 fundus images) |
| Imran et al. | Hybrid convolutional and recurrent neural network (CRNN) for cataract classification | Fundus | Achieved 97.39% accuracy for 4-class cataract classification | Limited fundus images for the moderate and severe cataract categories | Tongren Hospital, China (8030 fundus images) |
| Gao, Lin, & Wong | Automatically learned features for grading nuclear cataract severity from slit-lamp images using an unsupervised convolutional-recursive neural network (CRNN) method | Slit-lamp | Achieved a 70.7% exact agreement ratio against clinical integral grading, 88.4% of decimal grading errors ≤ 0.5, 99.0% of integral grading errors ≤ 1.0, and an MAE of 0.304 | Results might be affected by errors in the human-labelled ground truth | ACHIKO-NC dataset (5378 images) |
| Qian, Patton, Swaney, Xing, & Zeng | Supervised training of a convolutional neural network to classify different areas of cataract in the lens | Slit-lamp | Achieved 96.1% validation accuracy | Needs human intervention | No. 2 Hospital, Changshu, Jiangsu, China (420 slit-lamp images) |
| Zhang et al. | Nuclear cataract classification from anterior segment OCT images using a convolutional neural network (CNN) model named GraNet | OCT | Achieved less than 60% accuracy for all CNN models | Imbalanced dataset | Dataset acquired with the CASIA2 device (Tomey Corporation, Japan) (38,225 OCT images) |
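The CAM variants used by Li et al. share one core computation: the final convolutional feature maps are combined with the classifier weights for the target class, rectified, and normalised to localise the regions driving a "cataract" prediction. A minimal pure-Python sketch of that step, with toy inputs standing in for the C feature maps (each H×W) a trained CNN would produce; in practice the resulting map is upsampled to image size and overlaid as a heatmap:

```python
def class_activation_map(feature_maps, weights):
    """Weighted sum of per-channel feature maps for one class,
    followed by ReLU and normalisation to [0, 1]."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[max(0.0, sum(wt * fm[y][x] for wt, fm in zip(weights, feature_maps)))
            for x in range(w)] for y in range(h)]
    peak = max(max(row) for row in cam)
    if peak == 0:
        return cam  # no positive evidence anywhere
    return [[v / peak for v in row] for row in cam]
```

Grad-CAM generalises this by deriving the per-channel weights from gradients of the class score, so it works without the global-average-pooling architecture that plain CAM requires.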
Available Tools for Cataract Grading.
| Authors | Methods and Tools | Achievement | Limitation | Database |
|---|---|---|---|---|
| Kim et al. | Evaluated the correlation of LOCS III lens grading with nuclear lens density and whole-lens density using AS-OCT with a liquid optics interface | Nuclear density showed a higher positive correlation with LOCS III than whole-lens density | Needs human intervention | Asan Medical |
| Panthier et al. | Cataract grading method based on average lens density quantification with SS-OCT scans | Achieved 96.2% sensitivity and 91.3% specificity | A single-centre study that delineated the anterior and posterior cortex | Rothschild Foundation, Paris, France |
| Chen et al. | Evaluated the correlation of lens nuclear opacity quantified by a long-range SS-OCT method with LOCS III and a Scheimpflug imaging-based grading system | Obtained good correlation between SS-OCT nuclear density and both LOCS III and Pentacam nuclear density | Semiautomatic and time-consuming | 120 images |
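Screening figures like Panthier et al.'s 96.2% sensitivity and 91.3% specificity come from comparing binary density-based calls against a reference grading. A minimal sketch of that evaluation, assuming parallel lists of booleans where True means "cataract" (the labels here are illustrative, not the study's data):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    return tp / (tp + fn), tn / (tn + fp)

def screen(mean_density, cutoff):
    """Binary cataract call from average lens density; the cutoff is a
    hypothetical, dataset-specific value chosen on a training set."""
    return mean_density > cutoff
```

Moving the density cutoff trades sensitivity against specificity, which is why single-centre studies like the one above need external validation before the cutoff generalises.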
Figure 3. Unified framework for automated nuclear cataract severity classification [66].
Figure 4. Framework for a connected cataract screening system using a smartphone.