Amir-Homayoun Javadi, Zahra Hakimi, Morteza Barati, Vincent Walsh, Lili Tcheang.
Abstract
Mobile eye-tracking in external environments remains challenging, despite recent advances in eye-tracking software and hardware engineering. Many current methods fail to deal with the vast range of outdoor lighting conditions and the speed at which these can change. This confines experiments to artificial environments where conditions must be tightly controlled. Additionally, the emergence of low-cost eye-tracking devices calls for the development of analysis tools that enable non-technical researchers to process the output of their images. We have developed a fast and accurate method (known as "SET") that is suitable even for natural environments with uncontrolled, dynamic and even extreme lighting conditions. We compared the performance of SET with that of two open-source alternatives by processing two collections of eye images: images of natural outdoor scenes with extreme lighting variations ("Natural"); and images of less challenging indoor scenes ("CASIA-Iris-Thousand"). We show that SET excelled in outdoor conditions and was faster, without significant loss of accuracy, indoors. SET offers a low-cost eye-tracking solution, delivering high performance even in challenging outdoor environments. It is offered through an open-source MATLAB toolkit as well as a dynamic-link library ("DLL"), which can be imported into many programming languages including C# and Visual Basic in Windows OS (www.eyegoeyetracker.co.uk).
Keywords: dark pupil; ellipse fitting; eye tracking; head mounted device; pupil detection
Year: 2015 PMID: 25914641 PMCID: PMC4391030 DOI: 10.3389/fneng.2015.00004
Source DB: PubMed Journal: Front Neuroeng ISSN: 1662-6443
Figure 1. Sample images of (A) our own collection (referred to as "Natural") and (B) the CASIA-Iris collection.
Figure 2. Processing steps to extract the pupil center point. (A) The original grayscale image; (B) the black-and-white thresholded image; (C) the segmented image, with each segment displayed in a different color; (D) a highlighted segment (red) with its border extracted using the convex hull method (green border), which covers part of the pupil; the angle of each border point is calculated from the yellow lines and the horizontal white line that meet at the center of the segment; (E) the decomposition of this border into its sinusoidal components, each point being an edge on the green border shown in (D); and (F) the extracted pupil (blue ellipse) and the estimated pupil center point (cyan cross).
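The pipeline in Figure 2 ends with fitting an ellipse to the extracted segment border and taking its center as the pupil center point. As a rough illustration of that final step only (not the authors' SET implementation; `fit_ellipse_center` and the synthetic border data are hypothetical), a least-squares conic fit in NumPy:

```python
import numpy as np

def fit_ellipse_center(xs, ys):
    """Fit the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1 to border
    points by least squares, then recover the ellipse center.
    Hypothetical helper, not the authors' SET code."""
    A = np.column_stack([xs**2, xs * ys, ys**2, xs, ys])
    coef, *_ = np.linalg.lstsq(A, np.ones_like(xs), rcond=None)
    a, b, c, d, e = coef
    # The center is where both partial derivatives of the conic vanish:
    # 2a*x + b*y + d = 0 and b*x + 2c*y + e = 0.
    M = np.array([[2 * a, b], [b, 2 * c]])
    x0, y0 = np.linalg.solve(M, [-d, -e])
    return x0, y0

# Synthetic "pupil border": ellipse centered at (5, 3), radii 4 and 2,
# rotated 30 degrees (stand-in for the green border points in Figure 2D).
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
cx, cy, rx, ry, th = 5.0, 3.0, 4.0, 2.0, np.deg2rad(30)
x = cx + rx * np.cos(t) * np.cos(th) - ry * np.sin(t) * np.sin(th)
y = cy + rx * np.cos(t) * np.sin(th) + ry * np.sin(t) * np.cos(th)
x0, y0 = fit_ellipse_center(x, y)  # recovers (5.0, 3.0)
```

Because the synthetic points lie exactly on an ellipse, the fit is exact up to numerical precision; on real thresholded borders the least-squares fit averages out pixel noise.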
Figure 3. Calibration cross (on the left) used for calibrating the eye and scene cameras to map the pupil center point (PCP) to the point of regard (PoR).
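The calibration in Figure 3 maps the pupil center point (PCP) in the eye camera to the point of regard (PoR) in the scene camera. The excerpt does not specify which mapping SET uses; a common choice in head-mounted eye tracking is a low-order polynomial regression fitted on the calibration points. A minimal sketch under that assumption (`fit_gaze_map` and `apply_gaze_map` are hypothetical names):

```python
import numpy as np

def _design(pcp):
    # Second-order polynomial basis in the pupil coordinates.
    x, y = pcp[:, 0], pcp[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_gaze_map(pcp, por):
    """Fit a PCP -> PoR mapping by least squares on calibration points.
    Illustrative only; the paper excerpt does not state SET's exact model."""
    coef, *_ = np.linalg.lstsq(_design(pcp), por, rcond=None)
    return coef

def apply_gaze_map(coef, pcp):
    """Predict points of regard for new pupil center points."""
    return _design(pcp) @ coef

# Hypothetical 3x3 calibration grid with a known (affine) ground truth.
pcp = np.array([[i, j] for i in range(3) for j in range(3)], dtype=float)
por = np.column_stack([1 + 2 * pcp[:, 0] + 3 * pcp[:, 1],
                       4 + 0.5 * pcp[:, 0] - pcp[:, 1]])
coef = fit_gaze_map(pcp, por)
pred = apply_gaze_map(coef, np.array([[1.5, 0.5]]))  # -> [[5.5, 4.25]]
```

With nine calibration points and six basis terms the system is overdetermined, so small measurement noise in the calibration fixations is averaged out rather than interpolated exactly.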
Threshold (pixels) used for classifying frames as hit or miss under the exponential decay criterion, and detection rate (%), for each method and image collection.

| Method | Threshold (px), Natural | Detection rate (%), Natural | Threshold (px), CASIA-Iris | Detection rate (%), CASIA-Iris |
| --- | --- | --- | --- | --- |
| SET | 3.57 | 85.23 | 5.92 | 83.41 |
| Starburst | 8.84 | 79.15 | 9.95 | 93.68 |
| Gaze-Tracker | 15.17 | 28.48 | 7.41 | 32.75 |
Summary of comparisons of detection errors (two-independent-samples Mann-Whitney U-tests).

| Comparison | p (Natural) | Effect size (Natural) | p (CASIA-Iris) | Effect size (CASIA-Iris) |
| --- | --- | --- | --- | --- |
| SET and Starburst | <0.001 | 0.22 | <0.001 | 0.64 |
| SET and Gaze-Tracker | <0.001 | 0.37 | <0.001 | 0.69 |
| Starburst and Gaze-Tracker | =0.21 | 0.09 | =0.14 | 0.11 |
Figure 4. Detection error for different methods and image collections after exclusion of missed frames. Error bars reflect one standard deviation. *p < 0.001.
Figure 5. The cumulative distribution of detection error for different methods for the (A) Natural and (B) CASIA-Iris image collections. The y-axis shows the percentage of frames with detection error smaller than or equal to the value on the x-axis.
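Curves like those in Figure 5 are empirical cumulative distributions of per-frame detection error. A minimal sketch of how such a curve can be computed (illustrative only; `detection_cdf` is a hypothetical helper, not the authors' analysis code):

```python
import numpy as np

def detection_cdf(errors, thresholds):
    """For each threshold, return the percentage of frames whose
    detection error is smaller than or equal to that threshold
    (the y-axis of a Figure 5-style plot)."""
    errors = np.asarray(errors, dtype=float)
    return np.array([100.0 * np.mean(errors <= t) for t in thresholds])

# Hypothetical per-frame detection errors (pixels).
errs = [1.0, 2.0, 3.0, 4.0]
curve = detection_cdf(errs, thresholds=[2.0, 4.0])  # -> [50.0, 100.0]
```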
Percentage of frames excluded for excessive processing time, per method and image collection.

| Method | Natural (%) | CASIA-Iris (%) |
| --- | --- | --- |
| SET | 1.77 | 4.88 |
| Starburst | 4.46 | 1.42 |
| Gaze-Tracker | 0.51 | 1.79 |
Summary of comparisons of detection times (two-independent-samples Mann-Whitney U-tests).

| Comparison | p (Natural) | Effect size (Natural) | p (CASIA-Iris) | Effect size (CASIA-Iris) |
| --- | --- | --- | --- | --- |
| SET and Starburst (MATLAB) | <0.001 | 0.99 | <0.001 | 0.83 |
| SET and Gaze-Tracker (C#) | <0.001 | 0.99 | <0.001 | 0.99 |
Figure 6. Detection time for different methods and image collections. (A) Comparison between SET and Starburst; for this comparison both algorithms are run in MATLAB. (B) Comparison between SET and Gaze-Tracker; for this comparison both are run in C#. Error bars reflect one standard deviation. *p < 0.001.