Automating areas of interest analysis in mobile eye tracking experiments based on machine learning
Julian Wolf, Stephan Hess, David Bachmann, Quentin Lohmeyer, Mirko Meboldt.
Abstract
For an in-depth, AOI-based analysis of mobile eye tracking data, a preceding gaze assignment step is indispensable. Current solutions such as manual gaze mapping or marker-based approaches are tedious and not suitable for applications involving the manipulation of tangible objects. This makes mobile eye tracking studies with several hours of recording difficult to analyse quantitatively. We introduce a new machine learning-based algorithm, computational Gaze-Object Mapping (cGOM), that automatically maps gaze data onto the respective AOIs. cGOM extends state-of-the-art object detection and segmentation by Mask R-CNN with a gaze mapping feature. The new algorithm's performance is validated against a manual fixation-by-fixation mapping, which is considered ground truth, in terms of true positive rate (TPR), true negative rate (TNR) and efficiency. Using only 72 training images with 264 labelled object representations, cGOM reaches a TPR of approx. 80% and a TNR of 85% compared to the manual mapping. The break-even point is reached at 2 hours of eye tracking recording for the total procedure, or 1 hour when considering human working time only. Together with the real-time capability of the mapping process after completed training, even hours of eye tracking recording can be evaluated efficiently. (Code and video examples are available at: https://gitlab.ethz.ch/pdz/cgom.git)
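The mapping idea the abstract describes can be sketched in a few lines: each gaze point is assigned to the AOI whose predicted instance mask contains it, and the automatic labels are then compared fixation by fixation against the manual mapping to obtain TPR and TNR. The sketch below is a minimal illustration under assumed interfaces (boolean instance masks, per-fixation label lists), not the authors' implementation, which is available at the GitLab link above.

    import numpy as np

    def map_gaze_to_aoi(gaze_xy, masks, labels):
        """Assign one gaze point to an AOI label (None = background).

        gaze_xy : (x, y) pixel coordinates in the scene-camera frame
        masks   : boolean (H, W) arrays, e.g. the instance masks that
                  Mask R-CNN predicts for the current video frame
        labels  : object class name for each mask
        """
        x, y = int(round(gaze_xy[0])), int(round(gaze_xy[1]))
        for mask, label in zip(masks, labels):
            h, w = mask.shape
            if 0 <= y < h and 0 <= x < w and mask[y, x]:
                return label
        return None  # gaze fell on no detected object

    def tpr_tnr(auto, manual, aoi):
        """TPR/TNR of the automatic mapping for one AOI, taking the
        manual fixation-by-fixation mapping as ground truth."""
        tp = sum(a == aoi and m == aoi for a, m in zip(auto, manual))
        tn = sum(a != aoi and m != aoi for a, m in zip(auto, manual))
        pos = sum(m == aoi for m in manual)   # fixations manually on the AOI
        neg = len(manual) - pos               # fixations manually elsewhere
        return (tp / pos if pos else float("nan"),
                tn / neg if neg else float("nan"))

    # Toy usage: a 4x4 mask for a hypothetical "scissors" object
    mask = np.zeros((4, 4), dtype=bool)
    mask[1:3, 1:3] = True
    print(map_gaze_to_aoi((2, 2), [mask], ["scissors"]))  # -> scissors

Testing the mask at the gaze pixel is what distinguishes this from bounding-box approaches: pixel-accurate segmentation masks let gaze on non-rectangular or overlapping tangible objects be resolved correctly.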
Keywords: areas of interest; cGOM; gaze mapping; machine learning; mask R-CNN; mobile eye tracking; object detection; tangible objects; usability
Year: 2018 PMID: 33828716 PMCID: PMC7909988 DOI: 10.16910/jemr.11.6.6
Source DB: PubMed Journal: J Eye Mov Res ISSN: 1995-8692 Impact factor: 0.957