Deep-SAGA: a deep-learning-based system for automatic gaze annotation from eye-tracking data.

Oliver Deane, Eszter Toth, Sang-Hoon Yeo.

Abstract

With continued advancements in portable eye-tracker technology liberating experimenters from the restraints of artificial laboratory designs, researchers can now collect gaze data during real-world, natural navigation. However, the field lacks a robust method for processing such data: past approaches relied on time-consuming manual annotation of eye-tracking recordings, while previous attempts at automation lacked the versatility required for in-the-wild navigation trials consisting of complex and dynamic scenes. Here, we propose a system capable of informing researchers of where, and on what, a user's gaze is focused at any given time. The system first runs footage recorded by a head-mounted camera through a deep-learning-based object detection algorithm, the Masked Region-based Convolutional Neural Network (Mask R-CNN). The algorithm's output is then combined with frame-by-frame gaze coordinates, measured by an eye-tracking device synchronized with the head-mounted camera, to detect and annotate, without any manual intervention, what the user looked at in each frame of the provided footage. The effectiveness of the presented methodology was validated by comparing the system's output with that of manual coders. High levels of agreement between the two supported the system as the preferable data collection technique, as it processed data at a significantly faster rate than its human counterparts. The system's practicality was further demonstrated via a case study exploring the mediatory effects of gaze behaviors on an environment-driven attentional bias.
© 2022. The Author(s).
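
Illustrative sketch (not from the paper): the per-frame annotation step the abstract describes can be viewed as a point-in-mask lookup, checking which detected object's instance mask contains the synchronized gaze coordinate. The Python snippet below is a minimal, hypothetical rendering of that step; the function name annotate_gaze and the mask/label data layout are assumptions for illustration, not the authors' published implementation, and it assumes gaze coordinates are already mapped into the camera frame's pixel space.

    import numpy as np

    def annotate_gaze(masks, labels, gaze_xy):
        # masks   : list of HxW boolean numpy arrays, one per detected
        #           instance (e.g., Mask R-CNN masks for a single frame)
        # labels  : class names parallel to `masks` (e.g., "person", "door")
        # gaze_xy : (x, y) gaze point in the frame's pixel coordinates
        x, y = int(round(gaze_xy[0])), int(round(gaze_xy[1]))
        for mask, label in zip(masks, labels):
            h, w = mask.shape
            if 0 <= y < h and 0 <= x < w and mask[y, x]:
                return label       # gaze falls inside this instance mask
        return "background"        # no detected object under the gaze point

    # Hypothetical usage for one synchronized frame:
    # frame_label = annotate_gaze(frame_masks, frame_labels, (812.4, 377.9))

Applied frame by frame over the synchronized recording, such a lookup yields the automatic, per-frame object annotations described above; how to resolve overlapping masks (e.g., preferring the smaller or nearer instance) is a design choice this sketch leaves open.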

Keywords:  Deep learning; Gaze tracking; Masked region-based convolutional neural network; Object detection; Portable eye-tracker

Year:  2022        PMID: 35650384     DOI: 10.3758/s13428-022-01833-4

Source DB:  PubMed          Journal:  Behav Res Methods        ISSN: 1554-351X


