
A Generative Model of Cognitive State from Task and Eye Movements.

W Joseph MacInnes, Amelia R Hunt, Alasdair D F Clarke, Michael D Dodd.

Abstract

The early eye tracking studies of Yarbus provided descriptive evidence that an observer's task influences patterns of eye movements, leading to the tantalizing prospect that an observer's intentions could be inferred from their saccade behavior. We investigate the predictive value of task and eye movement properties by creating a computational cognitive model of saccade selection based on instructed task and internal cognitive state using a Dynamic Bayesian Network (DBN). Understanding how humans generate saccades under different conditions and cognitive sets links recent work on salience models of low-level vision with higher-level cognitive goals. This model provides a Bayesian, cognitive approach to top-down transitions in attentional set in pre-frontal areas along with vector-based saccade generation from the superior colliculus. Our approach is to begin with eye movement data that have previously been shown to differ across tasks. We first present an analysis of the extent to which individual saccadic features are diagnostic of an observer's task. Second, we use those features to infer an underlying cognitive state that potentially differs from the instructed task. Finally, we demonstrate how changes of cognitive state over time can be incorporated into a generative model of eye movement vectors without resorting to an external decision homunculus. Internal cognitive state frees the model from the assumption that instructed task is the only factor influencing observers' saccadic behavior. While the inclusion of hidden temporal state does not improve the classification accuracy of the model, it does allow accurate prediction of saccadic sequence results observed in search paradigms. Given the generative nature of this model, it is capable of saccadic simulation in real time. We demonstrate that the properties of its generated saccadic vectors closely match those of human observers given a particular task and cognitive state.
Many current models of vision focus entirely on bottom-up salience to produce estimates of spatial "areas of interest" within a visual scene. While a few recent models do add top-down knowledge and task information, we believe our contribution is important in three key ways. First, we incorporate task as learned attentional sets that are capable of self-transition given only information available to the visual system. This matches influential theories of bias signals (Miller and Cohen, Annu Rev Neurosci 24:167-202, 2001) and implements selection of state without simply shifting the decision to an external homunculus. Second, our model is generative and capable of predicting sequence artifacts in saccade generation like those found in visual search. Third, our model generates relative saccadic vector information as opposed to absolute spatial coordinates. This matches more closely the internal saccadic representations as they are generated in the superior colliculus.
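The abstract describes a generative structure in which a hidden cognitive state self-transitions over time and emits relative saccade vectors. As a rough illustration of that kind of model (not the paper's actual DBN), a minimal two-state sketch in Python might look like the following. The state names, transition probabilities, and emission parameters are all invented for illustration and are not taken from the paper.

```python
import random

# Hypothetical two-state generative sketch of a Dynamic Bayesian Network:
# a hidden cognitive state ("search" vs. "memorize") self-transitions at
# each time step and emits a relative saccade vector (amplitude, direction).
# All numbers below are illustrative, not fitted to any data.

STATES = ("search", "memorize")

# P(state_t | state_{t-1}): states tend to persist (an attentional set).
TRANSITION = {
    "search":   {"search": 0.9, "memorize": 0.1},
    "memorize": {"search": 0.2, "memorize": 0.8},
}

# State-conditional emission parameters for saccade amplitude, in degrees
# of visual angle: (mean, standard deviation).
AMPLITUDE = {"search": (6.0, 2.0), "memorize": (3.0, 1.0)}

def step_state(state, rng):
    """Sample the next hidden state from the transition distribution."""
    r, acc = rng.random(), 0.0
    for nxt, p in TRANSITION[state].items():
        acc += p
        if r < acc:
            return nxt
    return nxt  # guard against floating-point rounding

def emit_saccade(state, rng):
    """Sample a relative saccade vector (amplitude, direction) given the state."""
    mean, sd = AMPLITUDE[state]
    amplitude = max(0.1, rng.gauss(mean, sd))
    direction = rng.uniform(0.0, 360.0)  # uniform direction for simplicity
    return amplitude, direction

def simulate(n_saccades, start="search", seed=0):
    """Generate a sequence of (state, amplitude, direction) tuples."""
    rng = random.Random(seed)
    state, out = start, []
    for _ in range(n_saccades):
        state = step_state(state, rng)
        out.append((state, *emit_saccade(state, rng)))
    return out
```

Inferring the hidden state from observed saccade features would then proceed by standard DBN filtering (e.g., the forward algorithm), which this sketch omits.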


Keywords:  Cognitive model; Cognitive state; Dynamic Bayesian network; Eye movements; Task; Temporal model

Year:  2018        PMID: 30740186      PMCID: PMC6367733          DOI: 10.1007/s12559-018-9558-9

Source DB:  PubMed          Journal:  Cognit Comput        ISSN: 1866-9956            Impact factor:   5.418


References:  51 in total (first 10 shown)

Review 1.  An integrative theory of prefrontal cortex function.

Authors:  E K Miller; J D Cohen
Journal:  Annu Rev Neurosci       Date:  2001       Impact factor: 12.449

2.  Inhibition of return.

Authors: 
Journal:  Trends Cogn Sci       Date:  2000-04       Impact factor: 20.229

3.  A model of saccade initiation based on the competitive integration of exogenous and endogenous signals in the superior colliculus.

Authors:  T P Trappenberg; M C Dorris; D P Munoz; R M Klein
Journal:  J Cogn Neurosci       Date:  2001-02-15       Impact factor: 3.225

4.  Concurrent processing of saccades in visual search.

Authors:  R M McPeek; A A Skavenski; K Nakayama
Journal:  Vision Res       Date:  2000       Impact factor: 1.886

Review 5.  In what ways do eye movements contribute to everyday activities?

Authors:  M F Land; M Hayhoe
Journal:  Vision Res       Date:  2001       Impact factor: 1.886

6.  Interaction of the frontal eye field and superior colliculus for saccade generation.

Authors:  D P Hanes; R H Wurtz
Journal:  J Neurophysiol       Date:  2001-02       Impact factor: 2.714

Review 7.  Control of goal-directed and stimulus-driven attention in the brain.

Authors:  Maurizio Corbetta; Gordon L Shulman
Journal:  Nat Rev Neurosci       Date:  2002-03       Impact factor: 34.870

8.  Prefrontal regions play a predominant role in imposing an attentional 'set': evidence from fMRI.

Authors:  M T Banich; M P Milham; R A Atchley; N J Cohen; A Webb; T Wszalek; A F Kramer; Z Liang; V Barad; D Gullett; C Shah; C Brown
Journal:  Brain Res Cogn Brain Res       Date:  2000-09

9.  Saccade target selection in visual search: the effect of information from the previous fixation.

Authors:  J M Findlay; V Brown; I D Gilchrist
Journal:  Vision Res       Date:  2001-01       Impact factor: 1.886

Review 10.  Computational modelling of visual attention.

Authors:  L Itti; C Koch
Journal:  Nat Rev Neurosci       Date:  2001-03       Impact factor: 34.870

Cited by:  2 in total

1.  Modeling eye movement in dynamic interactive tasks for maximizing situation awareness based on Markov decision process.

Authors:  Shuo Ma; Jianbin Guo; Shengkui Zeng; Haiyang Che; Xing Pan
Journal:  Sci Rep       Date:  2022-08-02       Impact factor: 4.996

2.  Machine learning-based classification of viewing behavior using a wide range of statistical oculomotor features.

Authors:  Timo Kootstra; Jonas Teuwen; Jeroen Goudsmit; Tanja Nijboer; Michael Dodd; Stefan Van der Stigchel
Journal:  J Vis       Date:  2020-09-02       Impact factor: 2.240

