
A coherent computational approach to model bottom-up visual attention.

Olivier Le Meur, Patrick Le Callet, Dominique Barba, Dominique Thoreau.

Abstract

Visual attention is a mechanism that filters out redundant visual information and detects the most relevant parts of our visual field. Automatic determination of the most visually relevant areas would be useful in many applications, such as image and video coding, watermarking, video browsing, and quality assessment. Many research groups are currently investigating computational modeling of the visual attention system. The first published computational models were based on basic, well-understood Human Visual System (HVS) properties. These models feature a single perceptual layer that simulates only one aspect of the visual system. More recent models integrate complex features of the HVS and simulate a hierarchical perceptual representation of the visual input. The bottom-up mechanism is the most common feature found in modern models. This mechanism refers to involuntary attention, i.e., salient spatial visual features that effortlessly or involuntarily attract our attention. This paper presents a coherent computational approach to modeling bottom-up visual attention. The model is mainly based on the current understanding of HVS behavior. Contrast sensitivity functions, perceptual decomposition, visual masking, and center-surround interactions are some of the features implemented in this model. The performance of this algorithm is assessed using natural images and experimental measurements from an eye-tracking system. Two adequate, well-known metrics (correlation coefficient and Kullback-Leibler divergence) are used to validate the model, and a further metric is also defined. The results from this model are finally compared to those from a reference bottom-up model.
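The two validation metrics named in the abstract (correlation coefficient and Kullback-Leibler divergence) compare a model's saliency map against a fixation density map measured with an eye tracker. A minimal sketch of how they are commonly computed is shown below; the function names are illustrative, both maps are assumed non-negative and of equal shape, and the exact normalization and KL direction used in the paper may differ.

```python
import numpy as np

def normalize(m):
    """Normalize a non-negative map so it sums to 1 (a probability distribution)."""
    m = np.asarray(m, dtype=float)
    return m / m.sum()

def correlation_coefficient(sal, fix):
    """Pearson correlation between a saliency map and a fixation density map.

    Values near 1 indicate the model's salient regions match human fixations.
    """
    s, f = np.ravel(sal), np.ravel(fix)
    return float(np.corrcoef(s, f)[0, 1])

def kl_divergence(sal, fix, eps=1e-12):
    """KL divergence D(fix || sal) between the two maps as distributions.

    Lower values mean the saliency map better predicts the fixation density;
    eps avoids log(0) where either map is zero.
    """
    p = normalize(fix) + eps
    q = normalize(sal) + eps
    return float(np.sum(p * np.log(p / q)))
```

For identical maps, the correlation coefficient is 1 and the KL divergence is 0; the two metrics are complementary, since correlation is invariant to affine rescaling of the saliency map while KL divergence is sensitive to how probability mass is distributed.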

Year:  2006        PMID: 16640265     DOI: 10.1109/TPAMI.2006.86

Source DB:  PubMed          Journal:  IEEE Trans Pattern Anal Mach Intell        ISSN: 0098-5589            Impact factor:   6.226


Related articles: 18 in total (10 shown below)

1.  What do saliency models predict?

Authors:  Kathryn Koehler; Fei Guo; Sheng Zhang; Miguel P Eckstein
Journal:  J Vis       Date:  2014-03-11       Impact factor: 2.240

2.  Predicting Eye Fixations on Complex Visual Stimuli Using Local Symmetry.

Authors:  Gert Kootstra; Bart de Boer; Lambert R B Schomaker
Journal:  Cognit Comput       Date:  2011-01-12       Impact factor: 5.418

3.  Eye movement prediction and variability on natural video data sets.

Authors:  Michael Dorr; Eleonora Vig; Erhardt Barth
Journal:  Vis cogn       Date:  2012-03-26

4.  Predicting the eye fixation locations in the gray scale images in the visual scenes with different semantic contents.

Authors:  Hassan Zanganeh Momtaz; Mohammad Reza Daliri
Journal:  Cogn Neurodyn       Date:  2015-10-07       Impact factor: 5.082

5.  New insights into ambient and focal visual fixations using an automatic classification algorithm.

Authors:  Brice Follet; Olivier Le Meur; Thierry Baccino
Journal:  Iperception       Date:  2011-10-14

6.  Emergence of visual saliency from natural scenes via context-mediated probability distributions coding.

Authors:  Jinhua Xu; Zhiyong Yang; Joe Z Tsien
Journal:  PLoS One       Date:  2010-12-29       Impact factor: 3.240

7.  A neuromorphic architecture for object recognition and motion anticipation using burst-STDP.

Authors:  Andrew Nere; Umberto Olcese; David Balduzzi; Giulio Tononi
Journal:  PLoS One       Date:  2012-05-15       Impact factor: 3.240

8.  Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition.

Authors:  Na Shu; Zhiyong Gao; Xiangan Chen; Haihua Liu
Journal:  PLoS One       Date:  2015-07-01       Impact factor: 3.240

9.  A neural computational model for bottom-up attention with invariant and overcomplete representation.

Authors:  Qi Zou; Songnian Zhao; Zhe Wang; Yaping Huang
Journal:  BMC Neurosci       Date:  2012-11-29       Impact factor: 3.288

10.  The contributions of image content and behavioral relevancy to overt attention.

Authors:  Selim Onat; Alper Açık; Frank Schumann; Peter König
Journal:  PLoS One       Date:  2014-04-15       Impact factor: 3.240

