| Literature DB >> 29604082 |
Uta Noppeney, Hwee Ling Lee.
Abstract
To form a coherent percept of the environment, the brain must integrate sensory signals emanating from a common source but segregate those from different sources. Temporal regularities are prominent cues for multisensory integration, particularly for speech and music perception. In line with models of predictive coding, we suggest that the brain adapts an internal model to the statistical regularities in its environment. This internal model enables cross-sensory and sensorimotor temporal predictions as a mechanism to arbitrate between integration and segregation of signals from different senses.
Keywords: Bayesian causal inference; audiovisual; music; prediction error; speech
Year: 2018 PMID: 29604082 DOI: 10.1111/nyas.13615
Source DB: PubMed Journal: Ann N Y Acad Sci ISSN: 0077-8923 Impact factor: 5.691