
Local spectral anisotropy is a valid cue for figure-ground organization in natural scenes.

Sudarshan Ramenahalli, Stefan Mihalas, Ernst Niebur.

Abstract

An important step in understanding visual scenes is their organization into distinct perceptual objects, which requires figure-ground segregation. The determination of which side of an occlusion boundary is figure (closer to the observer) and which is ground (farther from the observer) is made through a combination of global cues, such as convexity, and local cues, such as T-junctions. Here we focus on a novel set of local cues in the intensity patterns along occlusion boundaries, which we show to differ between figure and ground. Image patches are extracted along object boundaries in natural scenes from two standard image sets, and spectral analysis is performed separately on the figure and ground sides. On the figure side, oriented spectral power orthogonal to the occlusion boundary significantly exceeds that parallel to the boundary. This "spectral anisotropy" is present only at higher spatial frequencies and is absent on the ground side. The difference in spectral anisotropy between the two sides of an occlusion border predicts which side is figure and which is ground with an accuracy exceeding 60% per patch. Spectral anisotropy at nearby locations along the boundary co-varies but is largely independent over larger distances, which allows results from different image regions to be combined. Given the low cost of this strictly local computation, we propose that spectral anisotropy along occlusion boundaries is a valuable cue for figure-ground segregation. A database of images and extracted patches labeled for figure and ground is made freely available.
Copyright © 2014 Elsevier Ltd. All rights reserved.
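The measurement at the heart of the abstract — comparing oriented spectral power orthogonal vs. parallel to the occlusion boundary, restricted to high spatial frequencies — can be sketched as follows. This is a minimal illustration, not the authors' code: the high-frequency cutoff, the angular wedge width, and the assumption of a vertical boundary through a square patch are all choices made here for clarity.

```python
import numpy as np

def spectral_anisotropy(patch, high_freq_cutoff=0.25, wedge_half_angle=np.pi / 8):
    """Anisotropy of high-frequency spectral power in a square image patch.

    The occlusion boundary is assumed to run vertically through the patch,
    so "orthogonal to the boundary" corresponds to the horizontal frequency
    axis. Returns (P_orth - P_par) / (P_orth + P_par): positive values mean
    more power orthogonal to the boundary, the figure-side signature
    reported in the paper. Cutoff and wedge width are illustrative.
    """
    patch = patch - patch.mean()                     # remove DC component
    power = np.abs(np.fft.fftshift(np.fft.fft2(patch))) ** 2

    n = patch.shape[0]
    freqs = np.fft.fftshift(np.fft.fftfreq(n))       # cycles per pixel
    fy, fx = np.meshgrid(freqs, freqs, indexing="ij")
    radius = np.hypot(fx, fy)                        # spatial frequency magnitude
    angle = np.arctan2(fy, fx)                       # orientation of each component

    high = radius > high_freq_cutoff                 # keep only high frequencies
    # Wedge around the horizontal axis: power orthogonal to a vertical boundary
    orth = high & (np.minimum(np.abs(angle), np.pi - np.abs(angle)) < wedge_half_angle)
    # Wedge around the vertical axis: power parallel to the boundary
    par = high & (np.abs(np.abs(angle) - np.pi / 2) < wedge_half_angle)

    p_orth, p_par = power[orth].sum(), power[par].sum()
    return (p_orth - p_par) / (p_orth + p_par + 1e-12)
```

As a sanity check, a patch whose intensity varies only in the direction orthogonal to the (vertical) boundary yields a value near +1, and one varying only parallel to it yields a value near -1; the paper's per-patch classification compares this quantity on the two sides of the border.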

Keywords:  Berkeley Segmentation Data Set (BSDS300); Figure–ground organization; LabelMe; Local cues; Spatial frequency power

Year: 2014    PMID: 25175115    PMCID: PMC4710966    DOI: 10.1016/j.visres.2014.08.012

Source DB: PubMed    Journal: Vision Res    ISSN: 0042-6989    Impact factor: 1.886


References (22 in total)

1.  Neural correlates for perception of 3D surface orientation from texture gradient.

Authors:  Ken-Ichiro Tsutsui; Hideo Sakata; Tomoka Naganuma; Masato Taira
Journal:  Science       Date:  2002-10-11       Impact factor: 47.728

2.  Animal detection in natural scenes: critical features revisited.

Authors:  Felix A Wichmann; Jan Drewes; Pedro Rosas; Karl R Gegenfurtner
Journal:  J Vis       Date:  2010-04-15       Impact factor: 2.240

3.  Figure and ground in the visual cortex: v2 combines stereoscopic cues with gestalt rules.

Authors:  Fangtu T Qiu; Rüdiger von der Heydt
Journal:  Neuron       Date:  2005-07-07       Impact factor: 17.173

4.  Surrounding suppression and facilitation in the determination of border ownership.

Authors:  Ko Sakai; Haruka Nishimura
Journal:  J Cogn Neurosci       Date:  2006-04       Impact factor: 3.225

5.  A neural model of figure-ground organization.

Authors:  Edward Craft; Hartmut Schütze; Ernst Niebur; Rüdiger von der Heydt
Journal:  J Neurophysiol       Date:  2007-04-18       Impact factor: 2.714

6.  Comparison of interpolating methods for image resampling.

Authors:  J Parker; R V Kenyon; D E Troxel
Journal:  IEEE Trans Med Imaging       Date:  1983       Impact factor: 10.048

7.  Local figure-ground cues are valid for natural images.

Authors:  Charless C Fowlkes; David R Martin; Jitendra Malik
Journal:  J Vis       Date:  2007-06-08       Impact factor: 2.240

8.  Make3D: learning 3D scene structure from a single still image.

Authors:  Ashutosh Saxena; Min Sun; Andrew Y Ng
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2009-05       Impact factor: 6.226

9.  Relations between the statistics of natural images and the response properties of cortical cells.

Authors:  D J Field
Journal:  J Opt Soc Am A       Date:  1987-12       Impact factor: 2.129

10.  Spatial frequency differences can determine figure-ground organization.

Authors:  Victor Klymenko; Naomi Weisstein
Journal:  J Exp Psychol Hum Percept Perform       Date:  1986-08       Impact factor: 3.332

Cited by (7 in total)

1.  Figure-ground responsive fields of monkey V4 neurons estimated from natural image patches.

Authors:  Kouji Kimura; Atsushi Kodama; Yukako Yamane; Ko Sakai
Journal:  PLoS One       Date:  2022-06-16       Impact factor: 3.752

2.  A proto-object based saliency model in three-dimensional space.

Authors:  Brian Hu; Ralinkae Kane-Jackson; Ernst Niebur
Journal:  Vision Res       Date:  2016-01-19       Impact factor: 1.886

3.  A conceptual framework of computations in mid-level vision.

Authors:  Jonas Kubilius; Johan Wagemans; Hans P Op de Beeck
Journal:  Front Comput Neurosci       Date:  2014-12-12       Impact factor: 2.380

4.  Population coding of figure and ground in natural image patches by V4 neurons.

Authors:  Yukako Yamane; Atsushi Kodama; Motofumi Shishikura; Kouji Kimura; Hiroshi Tamura; Ko Sakai
Journal:  PLoS One       Date:  2020-06-26       Impact factor: 3.240

5.  Figure-Ground Organization in Natural Scenes: Performance of a Recurrent Neural Model Compared with Neurons of Area V2.

Authors:  Brian Hu; Rüdiger von der Heydt; Ernst Niebur
Journal:  eNeuro       Date:  2019-06-28

6.  Analysis of spiking synchrony in visual cortex reveals distinct types of top-down modulation signals for spatial and object-based attention.

Authors:  Nobuhiko Wagatsuma; Brian Hu; Rüdiger von der Heydt; Ernst Niebur
Journal:  PLoS Comput Biol       Date:  2021-03-25       Impact factor: 4.475

7.  Distinguishing shadows from surface boundaries using local achromatic cues.

Authors:  Christopher DiMattina; Josiah J Burnham; Betul N Guner; Haley B Yerxa
Journal:  PLoS Comput Biol       Date:  2022-09-14       Impact factor: 4.779


Beijing Coyote Bioscience Co., Ltd. (北京卡尤迪生物科技股份有限公司) © 2022-2023.