
When more is less: increasing allocentric visual information can switch visual-proprioceptive combination from an optimal to sub-optimal process.

Patrick A Byrne, Denise Y P Henriques.

Abstract

When reaching for an object in the environment, the brain often has access to multiple independent estimates of that object's location. For example, if someone places their coffee cup on a table, then later they know where it is because they see it, but also because they remember how their reaching limb was oriented when they placed the cup. Intuitively, one would expect more accurate reaches if either of these estimates were improved (e.g., if a light were turned on so the cup were more visible). It is now well-established that the brain tends to combine two or more estimates about the same stimulus as a maximum-likelihood estimator (MLE), which is the best thing to do when estimates are unbiased. Even in the presence of small biases, relying on the MLE rule is still often better than choosing a single estimate. For this work, we designed a reaching task in which human subjects could integrate proprioceptive and allocentric (landmark-relative) visual information to reach for a remembered target. Even though both of these modalities contain some level of bias, we demonstrate via simulation that our subjects should use an MLE rule in preference to relying on one modality or the other in isolation. Furthermore, we show that when visual information is poor, subjects do, indeed, combine information in this way. However, when we improve the quality of visual information, subjects counter-intuitively switch to a sub-optimal strategy that occasionally includes reliance on a single modality.
Copyright © 2012 Elsevier Ltd. All rights reserved.
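The MLE rule described in the abstract weights each cue by its reliability (inverse variance). A minimal sketch of this standard inverse-variance-weighted combination, with illustrative numbers that are not taken from the paper:

```python
# Maximum-likelihood (inverse-variance-weighted) combination of two
# independent, unbiased estimates of the same quantity. The function name
# and the example values are illustrative, not from the study.
def mle_combine(est_a, var_a, est_b, var_b):
    """Weight each estimate by its reliability (1/variance)."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    w_b = 1 - w_a
    combined = w_a * est_a + w_b * est_b
    # The combined variance is lower than either cue's variance alone.
    combined_var = 1 / (1 / var_a + 1 / var_b)
    return combined, combined_var

# Example: proprioceptive estimate 10.0 cm (variance 4.0) and visual
# estimate 12.0 cm (variance 1.0). The result is pulled toward the more
# reliable visual cue: combined = 11.6, combined variance = 0.8.
pos, var = mle_combine(10.0, 4.0, 12.0, 1.0)
```

Because the combined variance is always below that of either single cue, an ideal observer prefers this rule over relying on one modality alone, which is what makes the observed switch to a single-cue strategy counter-intuitive.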


Year:  2012        PMID: 23142707     DOI: 10.1016/j.neuropsychologia.2012.10.008

Source DB:  PubMed          Journal:  Neuropsychologia        ISSN: 0028-3932            Impact factor:   3.139


  4 in total

1.  No effect of delay on the spatial representation of serial reach targets.

Authors:  Immo Schütz; Denise Y P Henriques; Katja Fiehler
Journal:  Exp Brain Res       Date:  2015-01-20       Impact factor: 1.972

2.  [Review] Are All Spatial Reference Frames Egocentric? Reinterpreting Evidence for Allocentric, Object-Centered, or World-Centered Reference Frames.

Authors:  Flavia Filimon
Journal:  Front Hum Neurosci       Date:  2015-12-09       Impact factor: 3.169

3.  Motor learning without moving: Proprioceptive and predictive hand localization after passive visuoproprioceptive discrepancy training.

Authors:  Ahmed A Mostafa; Bernard Marius 't Hart; Denise Y P Henriques
Journal:  PLoS One       Date:  2019-08-29       Impact factor: 3.240

4.  Experimentally disambiguating models of sensory cue integration.

Authors:  Peter Scarfe
Journal:  J Vis       Date:  2022-01-04       Impact factor: 2.240

