
A neural model of border-ownership from kinetic occlusion.

Oliver W Layton, Arash Yazdanbakhsh.

Abstract

Camouflaged animals whose textures closely match their surroundings are difficult to detect when stationary. However, when an animal moves, humans readily see a figure at a different depth than the background. How do humans perceive a figure breaking camouflage, even though the textures of the figure and its background may be statistically identical in luminance? We present a model that demonstrates how the primate visual system performs figure-ground segregation in extreme cases of breaking camouflage based on motion alone. Border-ownership signals develop as an emergent property in model V2 units whose receptive fields lie near kinetically defined borders that separate the figure and background. Model simulations support border-ownership as a general mechanism by which the visual system performs figure-ground segregation, regardless of whether figure-ground boundaries are defined by luminance or motion contrast. The gradient of motion- and luminance-related border-ownership signals explains the perceived depth ordering of the foreground and background surfaces. Our model predicts that V2 neurons sensitive to kinetic edges are selective for border-ownership (magnocellular B cells), while a distinct population of model V2 neurons is selective for border-ownership in figures defined by luminance contrast (parvocellular B cells). B cells in model V2 receive feedback from neurons in V4 and MT with larger receptive fields, which biases border-ownership signals toward the figure. We predict that neurons in V4 and MT that are sensitive to kinetically defined figures play a crucial role in determining whether the foreground surface accretes, deletes, or produces shearing motion with respect to the background.
Copyright © 2014 Elsevier Ltd. All rights reserved.
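The feedback mechanism described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, values, and the simple divisive-normalization competition are assumptions chosen to show the idea: two B cells code opposite ownership sides of the same border, and top-down feedback from a larger-receptive-field (V4/MT-like) unit on the figure's side tips the competition toward the figure.

```python
# Illustrative sketch (hypothetical, not the published model): two
# border-ownership (B) cells share the same bottom-up edge signal but
# receive different top-down feedback from large-RF grouping units.

def border_ownership(edge_drive, feedback_figure_side, feedback_ground_side):
    """Return normalized B-cell responses (figure side, ground side).

    edge_drive: bottom-up luminance- or kinetic-edge signal, shared by both cells.
    feedback_*: top-down bias from large-RF (V4/MT-like) units on each side.
    """
    b_fig = edge_drive * (1.0 + feedback_figure_side)
    b_gnd = edge_drive * (1.0 + feedback_ground_side)
    total = b_fig + b_gnd
    # Divisive normalization: the two cells compete for ownership of the border.
    return b_fig / total, b_gnd / total

# The figure occupies one side, so the grouping unit there is more active.
fig, gnd = border_ownership(edge_drive=1.0,
                            feedback_figure_side=0.8,
                            feedback_ground_side=0.1)
print(fig > gnd)  # ownership is assigned to the figure side
```

The same competition applies whether `edge_drive` comes from luminance contrast (parvocellular B cells) or from a kinetically defined border (magnocellular B cells), which is how the model treats border-ownership as a single general mechanism.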


Keywords:  Accretion/deletion; Border-ownership; Figure–ground; Inter-areal connection; Kinetic edge; Motion


Year:  2014        PMID: 25448117     DOI: 10.1016/j.visres.2014.11.002

Source DB:  PubMed          Journal:  Vision Res        ISSN: 0042-6989            Impact factor:   1.886


  4 in total

1. (Review) Efficient Temporal Coding in the Early Visual System: Existing Evidence and Future Directions.

Authors:  Byron H Price; Jeffrey P Gavornik
Journal:  Front Comput Neurosci       Date:  2022-07-04       Impact factor: 3.387

2.  Geometric figure-ground cues override standard depth from accretion-deletion.

Authors:  Ömer Daglar Tanrikulu; Vicky Froyen; Jacob Feldman; Manish Singh
Journal:  J Vis       Date:  2016       Impact factor: 2.240

3.  Sensorimotor Self-organization via Circular-Reactions.

Authors:  Dongcheng He; Haluk Ogmen
Journal:  Front Neurorobot       Date:  2021-12-13       Impact factor: 2.650

4.  Visual illusion susceptibility in autism: A neural model.

Authors:  Sangwook Park; Basilis Zikopoulos; Arash Yazdanbakhsh
Journal:  Eur J Neurosci       Date:  2022-06-22       Impact factor: 3.698

