| Literature DB >> 33644402 |
Zakir Khan Khan1, Arif Iqbal Umar1, Syed Hamad Shirazi1, Asad Rasheed1, Abdul Qadir1, Sarah Gul2.
Abstract
OBJECTIVE: Meibomian gland dysfunction (MGD) is a primary cause of dry eye disease. Analysis of MGD, its severity, shapes and variation in the acini of the meibomian glands (MGs) is receiving much attention in ophthalmology clinics. Existing methods for diagnosing, detection and analysing meibomianitis are not capable to quantify the irregularities to IR (infrared) images of MG area such as light reflection, interglands and intraglands boundaries, the improper focus of the light and positioning, and eyelid eversion. METHODS AND ANALYSIS: We proposed a model that is based on adversarial learning that is, conditional generative adversarial network that can overcome these blatant challenges. The generator of the model learns the mapping from the IR images of the MG to a confidence map specifying the probabilities of being a pixel of MG. The discriminative part of the model is responsible to penalise the mismatch between the IR images of the MG and confidence map. Furthermore, the adversarial learning assists the generator to produce a qualitative confidence map which is transformed into binary images with the help of fixed thresholding to fulfil the segmentation of MG. We identified MGs and interglands boundaries from IR images.Entities:
Keywords: imaging; iris; retina; vision
Year: 2021 PMID: 33644402 PMCID: PMC7883862 DOI: 10.1136/bmjophth-2020-000436
Source DB: PubMed Journal: BMJ Open Ophthalmol ISSN: 2397-3269
Figure 1 Shows original and processed IR images of inner eyelids. IR, infrared.
Shows that, per the table below, most of the patients (74%; 83/112) with meibomian gland dysfunction were between 41 and 85 years of age, 2% (2/112) were between 1 and 20 years, and 24% (27/112) were between 21 and 40 years.
| Age | 1–20 years | 21–40 years | 41–60 years | 61–85 years |
| Male | 01 | 12 | 21 | 17 |
| Female | 01 | 15 | 26 | 19 |
Of the 112 patients in total, 54% (61) were women and 46% (51) were men.
Figure 2 Architecture of the proposed conditional generative adversarial network for meibomian gland dysfunction analysis.
Provides a comparison of the evaluation metrics across segmentation methods
| Method | AJI | aHD | F1 score |
| FCN | 0.494 | 8.132 | 0.701 |
| U-net | 0.588 | 6.243 | 0.722 |
| GAN | 0.600 | 5.719 | 0.782 |
| Mask R-CNN | 0.601 | 5.721 | 0.801 |
| Proposed | 0.664 | 4.611 | 0.825 |
aHD, average Pompeiu-Hausdorff distance; AJI, Aggregated Jaccard Index; ANN, Artificial Neural Network; FCN, Fully Convolutional Network; GAN, generative adversarial network; RCNN, Region Based Convolutional Neural Networks.
Figure 3 Meibographic images taken from both upper and lower eyelids and analysed with the four automatic methods and the manual detection method. In the manual analysis, the green region shows the gland region and red represents the loss area; in the automatic analysis methods, the white region represents the gland area and coloured regions represent the loss area. Results revealed that the percentages from the automatic detection methods are almost on par with the manual analysis. In manual analysis, the analyser, while placing dots or lines around the glands, is likely to skip some minor regions between the glands, whereas the automatic analysis includes these regions. This difference is caused by scar tissue and light reflection in the images, which the system classifies as meibomian gland area. Figure 3 shows that MG-GAN outperformed state-of-the-art detection methods for MG detection. CGAN, conditional generative adversarial network; GAN, generative adversarial network; MG-GAN, meibomian gland-generative adversarial network.
Distribution of grades
| Grades | Auto (clinician I) (%) | Auto (clinician II) (%) | Manual (%) |
| I | 25 | 35 | 45 |
| II | 65 | 55 | 45 |
| III | 10 | 10 | 10 |
| IV | 0 | 0 | 0 |
Provides manual and automatic analysis of MG using a paired-sample t-test
| Method | Mean loss area (%) | Mean time |
| Manual analysis | 28.55±12.75 | 15±3.4 min |
| MG-GAN analysis | 30.1±12.64 | Less than a minute |
| Mask R-CNN | 30.6±12.33 | Less than a minute |
| GAN | 30.8±12.21 | Less than a minute |
| U-net analysis | 30.9±12.13 | Less than a minute |
| FCN analysis | 31.6±11.90 | Less than a minute |
| Adaptive thresholding | 33.91±10.50 | Less than a minute |
FCN, Fully Convolutional Network; MG-GAN, meibomian gland-generative adversarial network; RCNN, Region Based Convolutional Neural Networks.
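The paired-sample t-test used above compares per-image loss-area measurements from two methods on the same set of eyes. A minimal plain-Python sketch of the test statistic follows; the function name and the data in the usage example are hypothetical, not the study's measurements:

```python
import math

def paired_t_statistic(a, b):
    """Paired-sample t statistic for two equal-length measurement lists,
    e.g. per-image gland-loss percentages from manual vs automatic analysis.

    Computes t = mean(d) / sqrt(var(d) / n), where d are the pairwise
    differences and var uses the n-1 (sample) denominator.
    """
    n = len(a)
    diffs = [x - y for x, y in zip(a, b)]
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    return mean_d / math.sqrt(var_d / n)


# Hypothetical loss-area pairs for three images:
t = paired_t_statistic([1.0, 2.0, 4.0], [0.0, 1.0, 2.0])
# t == 4.0
```

In practice a library routine (e.g. a statistics package's paired t-test) would also report the p-value; the sketch shows only the statistic itself.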
Figure 4 (A) Manual versus meibomian gland-generative adversarial network (MG-GAN) (clinician I), (B) manual versus MG-GAN (clinician II) and (C) clinician I versus clinician II.
Demonstrates the kappa agreement analysis; the value of k and the relative strength of agreement are interpreted as <0.20 poor, 0.21–0.40 fair, 0.41–0.60 moderate, 0.61–0.80 good, >0.81 very good
| Method | k | P value | Agreement |
| Manual vs MG-GAN (clinician I) | 0.7081 | 0.0019 | Good |
| Manual vs MG-GAN (clinician II) | 0.5521 | 0.0199 | Moderate |
| Clinician I vs clinician II | 0.8549 | <0.001 | Good |
| Two MG-GAN measurement of clinician I | 0.8665 | <0.001 | Very good |
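The agreement labels in the table follow the kappa scale stated above. A small helper makes the mapping explicit; the function name is an assumption, and the gap between 0.80 and 0.81 in the stated scale is resolved here at 0.80:

```python
def kappa_strength(k):
    """Map a kappa (k) value to the relative strength-of-agreement label
    from the scale: <0.20 poor, 0.21-0.40 fair, 0.41-0.60 moderate,
    0.61-0.80 good, above that very good."""
    if k < 0.20:
        return "poor"
    if k <= 0.40:
        return "fair"
    if k <= 0.60:
        return "moderate"
    if k <= 0.80:
        return "good"
    return "very good"


label = kappa_strength(0.7081)
# label == "good", matching the table's Manual vs MG-GAN (clinician I) row
```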
Figure 5 Limit of agreement plot showing the consistency between manual and meibomian gland-generative adversarial network analysis. The average is plotted along the x-axis and the mean difference along the y-axis. (A) Manual versus clinician I, (B) manual versus clinician II and (C) clinician I versus clinician II.