
Biased Competition in Visual Processing Hierarchies: A Learning Approach Using Multiple Cues.

Alexander R T Gepperth; Sven Rebhan; Stephan Hasler; Jannik Fritsch

Abstract

In this contribution, we present a large-scale hierarchical system for object detection that fuses bottom-up (signal-driven) processing results with top-down (model- or task-driven) attentional modulation. Specifically, we focus on the question of how the autonomous learning of invariant models can be embedded into a performing system and how such models can be used to define object-specific attentional modulation signals. Our system implements bi-directional data flow in a processing hierarchy. The bottom-up data flow proceeds from a preprocessing level to the hypothesis level, where object hypotheses created by exhaustive object detection algorithms are represented in a roughly retinotopic way. A competitive selection mechanism is used to determine the most confident hypotheses, which are used on the system level to train multimodal models that link object identity to invariant hypothesis properties. The top-down data flow originates at the system level, where the trained multimodal models are used to obtain space- and feature-based attentional modulation signals, providing biases for the competitive selection process at the hypothesis level. This results in object-specific hypothesis facilitation/suppression in certain image regions, which we show to be applicable to different object detection mechanisms. To demonstrate the benefits of this approach, we apply the system to the detection of cars in a variety of challenging traffic videos. Evaluating our approach on a publicly available dataset containing approximately 3,500 annotated video images from more than one hour of driving, we show strong increases in performance and generalization compared to object detection in isolation. Furthermore, we compare our results to a late hypothesis rejection approach, showing that early coupling of top-down and bottom-up information is a favorable approach, especially when processing resources are constrained.
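The core mechanism the abstract describes, bottom-up detection confidences biased by top-down space- and feature-based modulation before a competitive selection step, can be sketched as follows. All names, array shapes, and the multiplicative form of the bias are illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np

def biased_selection(confidences, spatial_bias, feature_bias, top_k=2):
    """Combine bottom-up detection confidences with top-down attentional
    biases (space- and feature-based), then rank hypotheses competitively.

    Returns the indices of the top_k winning hypotheses and the modulated
    confidence scores for all hypotheses.
    """
    # Multiplicative modulation: bias > 1 facilitates, bias < 1 suppresses.
    modulated = confidences * spatial_bias * feature_bias
    order = np.argsort(modulated)[::-1]  # competitive ranking, best first
    return order[:top_k], modulated

# Example: three car hypotheses; the top-down model favors the road region
# (hypotheses 0 and 2) and a car-like feature profile for hypothesis 2.
conf = np.array([0.6, 0.8, 0.5])    # bottom-up detector confidences
s_bias = np.array([1.2, 0.5, 1.1])  # spatial bias (e.g. road vs. sky)
f_bias = np.array([1.0, 1.0, 1.3])  # feature-based bias
winners, scores = biased_selection(conf, s_bias, f_bias, top_k=2)
# Hypothesis 1 had the highest raw confidence but is suppressed by the
# spatial bias, so hypotheses 0 and 2 win the competition.
```

This also illustrates the abstract's point about early coupling: the bias reorders the competition before selection, rather than rejecting hypotheses after a full bottom-up pass.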


Year:  2011        PMID: 21475682      PMCID: PMC3059758          DOI: 10.1007/s12559-010-9092-x

Source DB:  PubMed          Journal:  Cognit Comput        ISSN: 1866-9956            Impact factor:   5.418


Cited References (13 in total)

1.  Attention to both space and feature modulates neuronal responses in macaque area V4.

Authors:  C J McAdams; J H Maunsell
Journal:  J Neurophysiol       Date:  2000-03       Impact factor: 2.714

2.  Competitive mechanisms subserve attention in macaque areas V2 and V4.

Authors:  J H Reynolds; L Chelazzi; R Desimone
Journal:  J Neurosci       Date:  1999-03-01       Impact factor: 6.167

3.  A neurodynamical cortical model of visual attention and invariant object recognition.

Authors:  Gustavo Deco; Edmund T Rolls
Journal:  Vision Res       Date:  2004-03       Impact factor: 1.886

4.  Learning optimized features for hierarchical models of invariant object recognition.

Authors:  Heiko Wersing; Edgar Körner
Journal:  Neural Comput       Date:  2003-07       Impact factor: 2.026

5.  Time course of attention reveals different mechanisms for spatial and feature-based attention in area V4.

Authors:  Benjamin Y Hayden; Jack L Gallant
Journal:  Neuron       Date:  2005-09-01       Impact factor: 17.173

6.  Modeling feature-based attention as an active top-down inference process.

Authors:  Fred H Hamker
Journal:  Biosystems       Date:  2006-04-07       Impact factor: 1.973

7. [Review] Mechanisms of visual object recognition: monkey and human studies.

Authors:  K Tanaka
Journal:  Curr Opin Neurobiol       Date:  1997-08       Impact factor: 6.627

8. [Review] Neural mechanisms of selective visual attention.

Authors:  R Desimone; J Duncan
Journal:  Annu Rev Neurosci       Date:  1995       Impact factor: 12.449

9. [Review] Computational modelling of visual attention.

Authors:  L Itti; C Koch
Journal:  Nat Rev Neurosci       Date:  2001-03       Impact factor: 34.870

10.  Modeling the influence of task on attention.

Authors:  Vidhya Navalpakkam; Laurent Itti
Journal:  Vision Res       Date:  2005-01       Impact factor: 1.886

