| Literature DB >> 35372579 |
Yena Christina Kang, Hee Kyung Yang, Young Jae Kim, Jeong-Min Hwang, Kwang Gi Kim.
Abstract
This study presents an automated algorithm that measures ocular deviation quantitatively using photographs of the nine cardinal points of gaze by means of deep learning (DL) and image processing techniques. Photographs were collected from patients with strabismus. The images were used as inputs for the DL segmentation models that segmented the sclerae and limbi. Subsequently, the images were registered for the mathematical algorithm. Two-dimensional sclera and limbus were modeled, and the corneal light reflex points of the primary gaze images were determined. Limbus recognition was performed to measure the pixel-wise distance between the corneal reflex point and limbus center. The segmentation models exhibited high performance, with 96.88% dice similarity coefficient (DSC) for the sclera segmentation and 95.71% DSC for the limbus segmentation. The mathematical algorithm was tested on two cranial nerve palsy patients to evaluate its ability to measure and compare ocular deviation in different directions. These results were consistent with the symptoms of such disorders. This algorithm successfully measured the distance of ocular deviation in patients with strabismus. With complementation in the dimension calculations, we expect that this algorithm can be used further in clinical settings to diagnose and measure strabismus at a low cost.Entities:
Mesh:
Year: 2022 PMID: 35372579 PMCID: PMC8970860 DOI: 10.1155/2022/9840494
Source DB: PubMed Journal: Biomed Res Int Impact factor: 3.411
Figure 1Nine cardinal gaze points.
Figure 2Architecture of U-Net model.
Figure 3Summary of the overall process of the algorithm. The sample images that are put into the segmentation model: (a) example of ground truth images of limbus, (b) example of raw data, and (c) example of ground truth images of sclera. The registration process: (d) example of center image with reference line (red line) drawn and (e) example of rotating image with reference line (red line) drawn. The center (green dot), length, and angle of the reference line in (e) are adjusted to match those of (d) for registration to take place. (f) The overlaid image of (d) and (e), showing that both eyes are positioned at the same location. (g) Example of recognized limbi (red circles) and the corneal reflex points (intersections of the horizontal and vertical lines). (h) Example of recognized limbus (red circle) and the measured distance between the limbus center and the corneal reflex point detected in (g).
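The registration step in (d)-(f) amounts to aligning the rotating image's reference line with the center image's reference line. A minimal sketch of how such an alignment could be computed, assuming each reference line is given by its two endpoint coordinates (the paper's exact implementation is not specified here):

```python
import numpy as np

def align_reference_line(src_p1, src_p2, dst_p1, dst_p2):
    """Similarity transform (scale, rotation, translation) mapping the
    source reference line onto the destination reference line."""
    src_p1, src_p2 = np.asarray(src_p1, float), np.asarray(src_p2, float)
    dst_p1, dst_p2 = np.asarray(dst_p1, float), np.asarray(dst_p2, float)
    src_vec, dst_vec = src_p2 - src_p1, dst_p2 - dst_p1
    # Scale so the line lengths match, rotate so the angles match.
    scale = np.linalg.norm(dst_vec) / np.linalg.norm(src_vec)
    angle = np.arctan2(dst_vec[1], dst_vec[0]) - np.arctan2(src_vec[1], src_vec[0])
    c, s = np.cos(angle), np.sin(angle)
    R = scale * np.array([[c, -s], [s, c]])
    # Translate so the line starting points coincide.
    t = dst_p1 - R @ src_p1
    return R, t  # a point p maps to R @ p + t
```

Applying the returned transform to every pixel coordinate of the rotating image would place both eyes at the same location, as in panel (f).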
Results of segmentation models.
| | Accuracy (%) | Sensitivity (%) | Specificity (%) | DSC (%) |
|---|---|---|---|---|
| Sclera segmentation | 99.84 | 97.47 | 99.90 | 96.88 |
| Limbus segmentation | 99.92 | 95.63 | 99.96 | 95.71 |
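The four metrics above follow from the pixel-wise confusion counts between a predicted mask and its ground truth. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise accuracy, sensitivity, specificity, and Dice similarity
    coefficient (DSC) for two boolean masks of the same shape."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)     # true positives
    tn = np.sum(~pred & ~truth)   # true negatives
    fp = np.sum(pred & ~truth)    # false positives
    fn = np.sum(~pred & truth)    # false negatives
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "dsc": 2 * tp / (2 * tp + fp + fn),
    }
```

Note that with a mostly-background image, accuracy and specificity can be high even for a poor mask, which is why the DSC is the more informative overlap measure here.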
Results of the algorithm on the test images. Patient 1 has fourth cranial nerve palsy, in which both eyes have difficulty moving downward and inward. Patient 2 has sixth cranial nerve palsy, in which the left eye cannot move outward.
Patient 1 (fourth cranial nerve palsy):

| | Downward, outward (pixels) | Downward, inward (pixels) | Percentage (%) |
|---|---|---|---|
| Left eye | 227 | 208 | 91.6 |
| Right eye | 218 | 195 | 89.4 |

Patient 2 (sixth cranial nerve palsy):

| | Left eye (pixels) | Right eye (pixels) | Percentage (%) |
|---|---|---|---|
| Outwards | 45 | 163 | 27.6 |
| Inwards | 109 | 105 | 103.8 |
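The percentage column in each row appears to express the impaired excursion relative to a reference excursion (the opposite direction for patient 1, the fellow eye for patient 2) — an assumption inferred from the tabulated numbers, not stated explicitly. A minimal sketch reproducing the table values:

```python
def deviation_percentage(reference_px, measured_px):
    """Measured excursion relative to the reference excursion, in percent,
    rounded to one decimal place as in the table."""
    return round(100 * measured_px / reference_px, 1)
```

For example, patient 1's left eye moved 208 pixels downward-inward against 227 pixels downward-outward, giving 91.6%; patient 2's left eye moved only 45 pixels outward against the right eye's 163 pixels, giving 27.6%.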
Figure 4(a) Algorithm result for patient 1. The measured pixel-wise distances indicate that the patient has difficulty moving the eyes inward. (b) Algorithm result for patient 2. The measured pixel-wise distances indicate that the patient has difficulty moving the left eye outward.
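The final measurement reduces to the Euclidean pixel distance between the detected corneal light reflex and the limbus center. A minimal sketch, with a least-squares (Kasa) circle fit standing in for the paper's limbus-recognition step (the fitting method is an assumption for illustration):

```python
import numpy as np

def fit_circle(points):
    """Least-squares (Kasa) circle fit: solve x^2 + y^2 = 2*a*x + 2*b*y + c
    linearly for (a, b, c); returns (center, radius)."""
    pts = np.asarray(points, float)
    A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + cx ** 2 + cy ** 2)
    return np.array([cx, cy]), radius

def ocular_deviation(limbus_points, reflex_point):
    """Pixel distance between the corneal light reflex and the limbus center."""
    center, _ = fit_circle(limbus_points)
    return float(np.linalg.norm(center - np.asarray(reflex_point, float)))
```

In the actual pipeline the limbus boundary points would come from the segmentation mask and the reflex point from the primary-gaze image, as in Figure 3(g)-(h).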