| Literature DB >> 35177973 |
Jia Zheng Lim, James Mountstephens, Jason Teo.
Abstract
CONTEXT: Eye tracking is a technology for measuring an individual's eye movements and eye positions. The eye data can be collected and recorded using an eye tracker. Eye-tracking data offer unprecedented insights into human actions and environments, digitizing how people communicate with computers and providing novel opportunities for passive biometric-based classification tasks such as emotion prediction. The objective of this article is to review which specific machine learning features can be obtained from eye-tracking data for classification tasks.
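Many of the fixation-based features reviewed below start from a fixation-detection pass over raw gaze samples. The following is a minimal sketch of the standard dispersion-threshold (I-DT) approach; the threshold values, the minimum window of 3 samples, and the function names are illustrative assumptions, not parameters taken from this review.

```python
# Minimal sketch of dispersion-threshold (I-DT) fixation detection.
# Thresholds and data below are illustrative assumptions only.

def dispersion(window):
    """Spread of a gaze window: (max x - min x) + (max y - min y)."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=35.0, min_duration=3):
    """samples: list of (x, y) gaze points at a fixed sampling rate.
    Returns (start_index, end_index, centroid) for each detected fixation."""
    fixations = []
    start = 0
    while start < len(samples):
        end = start + min_duration
        if end > len(samples):
            break
        if dispersion(samples[start:end]) <= max_dispersion:
            # Grow the window while dispersion stays under the threshold.
            while end < len(samples) and dispersion(samples[start:end + 1]) <= max_dispersion:
                end += 1
            xs = [p[0] for p in samples[start:end]]
            ys = [p[1] for p in samples[start:end]]
            fixations.append((start, end, (sum(xs) / len(xs), sum(ys) / len(ys))))
            start = end
        else:
            start += 1
    return fixations
```

Samples between detected fixations are then typically treated as saccades, from which saccade counts and amplitudes can be derived.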
Keywords: biometric; machine learning; classification; eye-tracking; feature extraction; fixation
Year: 2022 PMID: 35177973 PMCID: PMC8843826 DOI: 10.3389/fnbot.2021.796895
Source DB: PubMed Journal: Front Neurorobot ISSN: 1662-5218 Impact factor: 2.650
Summary of studies using eye features.
| Study | Year | Classification task | Objective | Eye features | Participants | Eye tracker | Classifier(s) | Accuracy |
|---|---|---|---|---|---|---|---|---|
| Cao et al. | 2016 | Intention recognition | To examine whether pupil variation is a relevant cue for endoscopic manipulator activation judgment | Pupil size, velocity of eye rotation | 12 (10 males, 2 females) | Tobii 1750 | SVM, PNN | 88.6% |
| Ahmed and Noble | 2016 | Image classification | To classify acquired image frames of the head, abdomen, and femur from 2-D B-mode ultrasound scans | Fixations | 10 | EyeTribe (30 Hz) | Bag-of-words model | 85–89% |
| Zhang and Juhola | 2017 | Biometric identification | To study biometric recognition as a multi-class classification task and biometric authentication as a binary classification task | Saccades | 109 | EyeLink (SR Research) | SVM, LDA, RBF, MLP | 80–90% |
| Zhou et al. | 2017 | Image classification | To propose a two-stage feature-selection approach for image classification that considers human factors and leverages the importance of eye-tracking data | Fixations, ROI | - | Tobii X120 | SVM | 94.21% |
| Borys et al. | 2017 | User performance classification in RFFT | To verify whether eye-tracking data combined with machine learning can be used to predict user performance in the RFFT | Fixations, saccades, blinks, pupil size | 61 | Tobii Pro TX300 | Quadratic discriminant analysis | 78.7% |
| Karessli et al. | 2017 | Image classification | To propose an approach that uses gaze data for zero-shot image classification | Gaze point | 5 | Tobii TX300 (300 Hz) | SVM | 78.2% |
| Labibah et al. | 2018 | Lie detection | To build a lie detector based on the analysis of pupil changes and eye movements using image processing and a decision-tree algorithm | Pupil diameter, eye movements | 40 | Computer camera | Decision tree | 95% |
| Qi et al. | 2018 | Material classification | To investigate how humans perceive material images and whether eye-fixation information improves the efficiency of material recognition | Fixation points, gaze paths | 8 | Eye-tracker | CNN | 85.9% |
| Singh et al. | 2018 | Reading pattern classification | To analyze inspectors' reading patterns via eye tracking and assess their ability to detect specific types of faults | Fixations, saccades | 39 | EyeLink 1000 | NB, MNB, RF, SGD, ensemble, decision trees, Lazy network | 79.3–94% |
| Lagodzinski et al. | 2018 | Cognitive activity recognition | To show that eye-movement analysis can be used effectively for activity recognition, given its close link to cognitive activities | EOG, accelerometer data | 100 | JINS MEME EOG-based eye-tracker | SVM | 99.3% |
| Bozkir et al. | 2019 | Cognitive load classification | To propose a scheme for detecting driver cognitive load in safety-critical situations using eye data in VR | Pupil diameter | 16 | Pupil Labs | SVM, KNN, RF, decision trees | 80% |
| Orlosky et al. | 2019 | User understanding recognition | To recognize a user's vocabulary understanding in AR/VR learning interfaces using eye-tracking | Pupil size | 16 | Pupil Labs Dev IR camera | SVM | 62–75% |
| Sargezeh et al. | 2019 | Gender classification | To examine eye-movement parameters to explore gender differences in gaze patterns while viewing indoor images and classify viewers into two subgroups | Saccade amplitude, number of saccades, fixation duration, spatial density, scan path, RFDSD | 45 (25 males, 20 females) | EyeLink 1000 Plus | SVM | 84.4% |
| Tamuly et al. | 2019 | Image classification | To develop a system for classifying images into three categories from extracted eye features | Fixation count, average fixation duration, fixation frequency, saccade count, saccade frequency, total saccade duration, total saccade velocity | 25 | SMI eye-tracker | KNN, NB, decision trees | 57.6% |
| Luo et al. | 2019 | Object detection | To develop a framework for extracting high-level eye features from a low-cost remote eye-tracker's outputs with which objects can be detected | Fixation length, radius of fixation, number of time-adjacent clusters | 15 (6 males, 9 females) | Tobii Eye Tracker 4C | SVM | 97.85% |
| Startsev and Dorr | 2019 | ASD classification | To propose a fully automated framework that classifies an individual's viewing behavior as likely associated with either ASD or typical development, based on scan path and analytically expected salience | Fixations, scan path | 14 | Tobii T120 | RF | 76.9% AUC |
| Zhu et al. | 2019 | Depression recognition | To propose a depression-detection approach using CBEM and compare its accuracy with traditional classifiers | Fixation, saccade, pupil size, dwell time | 36 | EyeLink 1000 | CBEM | 82.5% |
| Vidyapu et al. | 2019 | Attention prediction | To present an approach for predicting user attention on webpage images | Fixations | 42 (21 males, 21 females) | Computer webcam | SVM | 67.49% |
| Kacur et al. | 2019 | Schizophrenia detection | To present a method for detecting schizophrenia using the Rorschach inkblot test and eye-tracking | Gaze position | 44 | Tobii X2-60 | KNN | 62–75% |
| Yoo et al. | 2019 | Gaze-writing classification | To propose a gaze-writing entry method that recognizes numeric gaze-writing in a hands-free setting | Gaze position | 10 | Tobii Pro X2-30 | CNN | 99.21% |
| Roy et al. | 2020 | Image identification | To develop a cognitive model for ambiguous image identification | Eye fixations, fixation duration, pupil diameter, polar moments, moments of inertia | 24 (all males) | Tobii Pro X2-30 | LDA, QDA, SVM, KNN, decision trees, bagged tree | ~90% |
| Guo et al. | 2021 | Workload estimation | To investigate the use of eye-tracking for workload estimation and performance evaluation in space teleoperation | Eye fixation, saccade, blink, gaze, and pupillary response | 10 (8 males, 2 females) | Pupil Labs Core | SVM (RBF kernel) with LOSO validation | 49.32% |
| Saab et al. | 2021 | Image classification | To propose an observational-supervision approach for medical image classification using gaze features and deep learning | Gaze data | - | Tobii Pro Nano | CNN | 84.5% |
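Most studies in the table feed a small vector of aggregate eye statistics (fixation counts, mean durations, saccade amplitudes) to a classifier such as an SVM. A minimal sketch of building such a feature vector from detected fixations follows; the 60 Hz sampling rate, the centroid-distance approximation of saccade amplitude, and the field names are illustrative assumptions, not the method of any specific study above.

```python
# Sketch of aggregating detected fixations into a classifier-ready
# feature vector. Sampling rate and field names are assumptions.
import math

def eye_feature_vector(fixations, sampling_rate_hz=60.0):
    """fixations: list of (start_index, end_index, (cx, cy)) tuples,
    e.g. from a dispersion-based fixation detector."""
    n = len(fixations)
    if n == 0:
        return {"fixation_count": 0, "mean_fixation_duration_s": 0.0,
                "mean_saccade_amplitude_px": 0.0}
    durations = [(end - start) / sampling_rate_hz for start, end, _ in fixations]
    # Saccade amplitude approximated as the distance between
    # consecutive fixation centroids.
    amplitudes = [math.dist(a[2], b[2]) for a, b in zip(fixations, fixations[1:])]
    return {
        "fixation_count": n,
        "mean_fixation_duration_s": sum(durations) / n,
        "mean_saccade_amplitude_px": sum(amplitudes) / len(amplitudes) if amplitudes else 0.0,
    }
```

One such vector per trial (or per participant) forms a row of the training matrix handed to the classifier.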
Figure 1. Selection process with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) search strategy.
Summary of research using eye features in combination with other signals.
| Study | Year | Classification task | Objective | Eye features | Other signals | Participants | Device(s) | Classifier(s) | Accuracy |
|---|---|---|---|---|---|---|---|---|---|
| Slanzi et al. | 2017 | Web users' click intention prediction | To propose a behavioral analysis that evaluates web users' click intention as a mechanism for analyzing user activity on a website | Pupil size, gaze positions | EEG | 21 (10 males, 11 females) | Sofey eye-tracking system (30 Hz) | Logistic regression | 71.09% |
| Shi et al. | 2017 | Emotion recognition | To implement an assessment method for the automated classification of high- and low-quality data using spatial-temporal scan path analysis | Fixations, scan path | EEG | 26 (15 males, 11 females) | SMI eye-tracking glasses | Linear SVM | 81.7% |
| Czyzewski et al. | 2017 | Classification of real and imagined limb motion | To propose an experimental multimodal device for the polysensory treatment and stimulation of non-communicative subjects with serious brain injuries | Gaze fixation points | EEG | 10 (9 males, 1 female) | EyeX Controller | SVM, ANN, rough sets | 91% |
| Wilbertz et al. | 2018 | Decoding of bistable plaid motion perception | To optimize the decoding of perceptual alternations using combined eye and brain signals | Eye positions | fMRI | 20 (8 males, 12 females) | iView XTM MRI (50 Hz) | SVM | 91% |
| Guo et al. | 2019 | Emotion recognition | To integrate the eye-image modality into multimodal emotion detection combining eye movements and EEG | Pupil diameter, blink | EEG, EIG | 16 (6 males, 10 females) | SMI ETG glasses | SVM | 79.63% |
| Jiang et al. | 2019 | ASD classification | To investigate atypical visual behavior in ASD patients through facial-emotion and eye-tracking data | Eye fixations | Face features | 58 | Tobii Pro TX300, Tobii X2-60 | RF | 86% |
| Thapaliya et al. | 2019 | ASD classification | To evaluate EEG and eye data for ASD diagnosis using machine learning algorithms | Fixation times | EEG | 52 | Tobii X50 | SVM, DNN, NB, logistic regression | 71–100% |
| Ding et al. | 2019 | MDD classification | To present an approach combining eye-tracking data, EEG, and GSR to distinguish patients with depression from healthy controls | Number of fixations, mean glance duration | EEG, GSR | 348 | Tobii Eye Tracker 4C | SVM, RF, logistic regression | 79.63% |
| Abdelrahman et al. | 2019 | Attention classification | To propose a new approach combining eye-tracking and thermal imaging to identify attention types | Fixation duration | Thermal imaging | 22 (14 males, 8 females) | Tobii EyeX | SVM, KNN, logistic regression | 75–87% |
| Lin et al. | 2019 | Mental spelling classification | To develop a high-speed mental spelling system using eye-tracking and EEG signals | Gaze position | EEG | 5 | Tobii Eye Tracker 4C | FBCCA | 92.1% |
| Horng and Lin | 2020 | Drowsiness prediction and classification | To design an experiment on physiological cognitive-state prediction using multimodal bio-signals | Gaze point | GSR, brainwave signals, heart rate | 10 | Tobii Eye Tracker 4C | ANN, SVM | 89.1% |
| Kubacki | 2021 | Element sorting | To propose a BCI system for element sorting using SSVEP, EOG, eye-tracking, and force feedback | EOG, eye positions | SSVEP | 3 | Camera with eyelike library | BCI system | 90% |
| Song et al. | 2021 | Vigilance estimation | To propose a DCRA combining EEG and EOG for vigilance estimation | EOG | EEG | 23 (11 males, 12 females) | Neuroscan system, eye-tracking glasses | RNN | 80–85% |
| Ha et al. | 2021 | Meal-assist detection | To propose a BCI system for meal assistance using triple eye blinking, EEG, and EMG | Eye blink | EEG, EMG | 5 (all males) | Computer camera | BCI system | 94.67% |
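A recurring pattern in the multimodal studies above is feature-level (early) fusion: each modality's features are normalized separately and then concatenated into a single vector for one classifier. The sketch below illustrates that idea with per-column z-scoring; the helper names and the data in the usage are assumptions for illustration, not code from any study cited.

```python
# Sketch of feature-level (early) fusion: normalize each modality's
# feature columns separately, then concatenate per sample.
# Helper names and data are illustrative assumptions.

def zscore_columns(rows):
    """Z-score each feature (column) across samples (rows)."""
    out_cols = []
    for col in zip(*rows):
        mean = sum(col) / len(col)
        std = (sum((v - mean) ** 2 for v in col) / len(col)) ** 0.5
        out_cols.append([(v - mean) / std if std > 0 else 0.0 for v in col])
    return [list(r) for r in zip(*out_cols)]

def fuse(eye_rows, other_rows):
    """Normalize each modality separately, then concatenate per sample,
    so one classifier sees both eye and (e.g.) EEG features."""
    eye_norm = zscore_columns(eye_rows)
    other_norm = zscore_columns(other_rows)
    return [e + o for e, o in zip(eye_norm, other_norm)]
```

Normalizing per modality before concatenation keeps features with very different scales (pupil millimetres vs. EEG microvolts) from dominating a distance- or margin-based classifier.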