
Learning a saliency map using fixated locations in natural scenes.

Qi Zhao, Christof Koch.

Abstract

Inspired by the primate visual system, computational saliency models decompose visual input into a set of feature maps across spatial scales in a number of pre-specified channels. The outputs of these feature maps are summed to yield the final saliency map. Here we use a least-squares technique to learn the weights associated with these maps from subjects freely fixating natural scenes drawn from four recent eye-tracking data sets. Depending on the data set, the weights can be quite different, with the face and orientation channels usually more important than the color and intensity channels. Inter-subject differences are negligible. We also model a bias toward fixating at the center of images and consider both time-varying and constant factors that contribute to this bias. To compensate for the inadequacy of the standard performance metric (area under the ROC curve), we use two additional metrics to assess performance more comprehensively. Although our model retains the basic structure of the standard saliency model, it outperforms several state-of-the-art saliency algorithms. Furthermore, the simple structure makes the results applicable to numerous studies in psychophysics and physiology and leads to an extremely easy implementation for real-world applications.
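The weight-learning step described in the abstract can be sketched as an ordinary least-squares fit: stack the per-channel feature maps as columns of a design matrix and regress the observed fixation map onto them. The sketch below uses synthetic data; the channel names, map sizes, and noise level are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the model's per-channel feature maps
# (color, intensity, orientation, face), each flattened to a vector.
# These names and values are illustrative only.
channels = ["color", "intensity", "orientation", "face"]
n_pixels = 64 * 48
F = rng.random((n_pixels, len(channels)))  # one column per channel

# Ground-truth weights used only to fabricate a target fixation map.
true_w = np.array([0.1, 0.1, 0.4, 0.4])
fixation_map = F @ true_w + 0.01 * rng.standard_normal(n_pixels)

# Least-squares fit of the channel weights to the fixation map.
w, *_ = np.linalg.lstsq(F, fixation_map, rcond=None)

# The final saliency map is the weighted sum of the channel maps.
saliency = F @ w
```

With many pixels per image and only a handful of channels, the fit is strongly overdetermined, which is why a plain least-squares solve suffices; the recovered `w` here lands close to `true_w`.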

MeSH:

Year:  2011        PMID: 21393388     DOI: 10.1167/11.3.9

Source DB:  PubMed          Journal:  J Vis        ISSN: 1534-7362            Impact factor:   2.240


  26 in total

1.  Invariant Visual Object and Face Recognition: Neural and Computational Bases, and a Model, VisNet.

Authors:  Edmund T Rolls
Journal:  Front Comput Neurosci       Date:  2012-06-19       Impact factor: 2.380

2.  Saliency and saccade encoding in the frontal eye field during natural scene search.

Authors:  Hugo L Fernandes; Ian H Stevenson; Adam N Phillips; Mark A Segraves; Konrad P Kording
Journal:  Cereb Cortex       Date:  2013-07-17       Impact factor: 5.357

3.  Simultaneous modeling of visual saliency and value computation improves predictions of economic choice.

Authors:  R Blythe Towal; Milica Mormann; Christof Koch
Journal:  Proc Natl Acad Sci U S A       Date:  2013-09-09       Impact factor: 11.205

4.  Memory and prediction in natural gaze control.

Authors:  Gabriel Diaz; Joseph Cooper; Mary Hayhoe
Journal:  Philos Trans R Soc Lond B Biol Sci       Date:  2013-09-09       Impact factor: 6.237

5.  Saccades to future ball location reveal memory-based prediction in a virtual-reality interception task.

Authors:  Gabriel Diaz; Joseph Cooper; Constantin Rothkopf; Mary Hayhoe
Journal:  J Vis       Date:  2013-01-16       Impact factor: 2.240

6.  What do saliency models predict?

Authors:  Kathryn Koehler; Fei Guo; Sheng Zhang; Miguel P Eckstein
Journal:  J Vis       Date:  2014-03-11       Impact factor: 2.240

7.  Eye guidance in natural vision: reinterpreting salience. [Review]

Authors:  Benjamin W Tatler; Mary M Hayhoe; Michael F Land; Dana H Ballard
Journal:  J Vis       Date:  2011-05-27       Impact factor: 2.240

8.  Atypical Visual Saliency in Autism Spectrum Disorder Quantified through Model-Based Eye Tracking.

Authors:  Shuo Wang; Ming Jiang; Xavier Morin Duchesne; Elizabeth A Laugeson; Daniel P Kennedy; Ralph Adolphs; Qi Zhao
Journal:  Neuron       Date:  2015-10-22       Impact factor: 17.173

9.  Autism spectrum disorder, but not amygdala lesions, impairs social attention in visual search.

Authors:  Shuo Wang; Juan Xu; Ming Jiang; Qi Zhao; Rene Hurlemann; Ralph Adolphs
Journal:  Neuropsychologia       Date:  2014-09-08       Impact factor: 3.139

10.  Art expertise reduces influence of visual salience on fixation in viewing abstract-paintings.

Authors:  Naoko Koide; Takatomi Kubo; Satoshi Nishida; Tomohiro Shibata; Kazushi Ikeda
Journal:  PLoS One       Date:  2015-02-06       Impact factor: 3.240
