
Predicting visual fixations on video based on low-level visual features.

Olivier Le Meur, Patrick Le Callet, Dominique Barba.

Abstract

To what extent can a computational model of bottom-up visual attention predict what an observer is looking at? What is the contribution of low-level visual features to the deployment of attention? To answer these questions, a new spatio-temporal computational model is proposed. This model incorporates several visual features; a fusion algorithm is therefore required to combine the different saliency maps (achromatic, chromatic and temporal). To quantitatively assess the model's performance, eye movements were recorded while naive observers viewed natural dynamic scenes. Four complementary metrics were used. In addition, predictions from the proposed model are compared to predictions from a state-of-the-art model [Itti's model (Itti, L., Koch, C., & Niebur, E. (1998). A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11), 1254-1259)] and from three non-biologically plausible models (uniform, flicker and centered models). Regardless of the metric used, the proposed model shows significant improvement over the selected benchmarking models (except the centered model). Conclusions are drawn regarding both the influence of low-level visual features over time and the central bias in an eye-tracking experiment.
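The abstract describes fusing per-feature saliency maps (achromatic, chromatic, temporal) into a single master map. The paper's actual fusion algorithm is more elaborate; the sketch below is only a minimal illustration of the general idea, assuming peak-to-peak normalization of each map followed by a weighted average (the function names and the averaging scheme are assumptions, not the authors' method):

```python
import numpy as np

def normalize_map(s):
    """Rescale a saliency map to the [0, 1] range (peak-to-peak normalization)."""
    s = np.asarray(s, dtype=float)
    rng = s.max() - s.min()
    # A constant map carries no saliency information: return all zeros.
    return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

def fuse_saliency_maps(achromatic, chromatic, temporal, weights=(1.0, 1.0, 1.0)):
    """Combine three per-feature saliency maps into one master map.

    Each map is normalized independently, then the maps are averaged
    with the given weights; the result stays within [0, 1].
    """
    maps = [normalize_map(m) for m in (achromatic, chromatic, temporal)]
    w = np.asarray(weights, dtype=float)
    fused = sum(wi * m for wi, m in zip(w, maps)) / w.sum()
    return fused
```

With equal weights this reduces to a plain mean of the normalized feature maps; weighting lets one feature (e.g. the temporal map for moving stimuli) dominate the fused prediction.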

MeSH:

Year:  2007        PMID: 17688904     DOI: 10.1016/j.visres.2007.06.015

Source DB:  PubMed          Journal:  Vision Res        ISSN: 0042-6989            Impact factor:   1.886


Related articles: 22 in total

1.  Age differences in online processing of video: an eye movement study.

Authors:  Heather L Kirkorian; Daniel R Anderson; Rachel Keen
Journal:  Child Dev       Date:  2012-01-30

2. [Review] Eye movements: the past 25 years.

Authors:  Eileen Kowler
Journal:  Vision Res       Date:  2011-01-13       Impact factor: 1.886

3.  Modeling peripheral visual acuity enables discovery of gaze strategies at multiple time scales during natural scene search.

Authors:  Pavan Ramkumar; Hugo Fernandes; Konrad Kording; Mark Segraves
Journal:  J Vis       Date:  2015-03-26       Impact factor: 2.240

4.  Effect of sequential video shot comprehensibility on attentional synchrony: A comparison of children and adults.

Authors:  Heather L Kirkorian; Daniel R Anderson
Journal:  Proc Natl Acad Sci U S A       Date:  2018-10-02       Impact factor: 11.205

5.  Eye movements while viewing narrated, captioned, and silent videos.

Authors:  Nicholas M Ross; Eileen Kowler
Journal:  J Vis       Date:  2013-03-01       Impact factor: 2.240

6.  Temporal eye movement strategies during naturalistic viewing.

Authors:  Helena X Wang; Jeremy Freeman; Elisha P Merriam; Uri Hasson; David J Heeger
Journal:  J Vis       Date:  2012-01-19       Impact factor: 2.240

7.  SUN: Top-down saliency using natural statistics.

Authors:  Christopher Kanan; Mathew H Tong; Lingyun Zhang; Garrison W Cottrell
Journal:  Vis cogn       Date:  2009-08-01

8.  Saliency computation via whitened frequency band selection.

Authors:  Qi Lv; Bin Wang; Liming Zhang
Journal:  Cogn Neurodyn       Date:  2016-01-06       Impact factor: 5.082

9.  Information-theoretic model comparison unifies saliency metrics.

Authors:  Matthias Kümmerer; Thomas S A Wallis; Matthias Bethge
Journal:  Proc Natl Acad Sci U S A       Date:  2015-12-10       Impact factor: 11.205

10.  What do saliency models predict?

Authors:  Kathryn Koehler; Fei Guo; Sheng Zhang; Miguel P Eckstein
Journal:  J Vis       Date:  2014-03-11       Impact factor: 2.240

