| Literature DB >> 9104006 |
S Grossberg, I Boardman, M Cohen.
Abstract
What is the neural representation of a speech code as it evolves in time? A neural model simulates data concerning segregation and integration of phonetic percepts. Hearing two phonetically related stops in a VC-CV pair (V = vowel; C = consonant) requires 150 ms more closure time than hearing two phonetically different stops in a VC1-C2V pair. Closure time also varies with long-term stimulus rate. The model simulates rate-dependent category boundaries that emerge from feedback interactions between a working memory for short-term storage of phonetic items and a list categorization network for grouping sequences of items. The conscious speech code is a resonant wave. It emerges after bottom-up signals from the working memory select list chunks which read out top-down expectations that amplify and focus attention on consistent working memory items. In VC1-C2V pairs, resonance is reset by mismatch of C2 with the C1 expectation. In VC-CV pairs, resonance prolongs a repeated C.
Entities:
Mesh:
Year: 1997 PMID: 9104006 DOI: 10.1037//0096-1523.23.2.481
Source DB: PubMed Journal: J Exp Psychol Hum Percept Perform ISSN: 0096-1523 Impact factor: 3.332
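The match/reset cycle the abstract describes (bottom-up selection of a list chunk, top-down expectation, resonance on match, reset on mismatch) can be sketched in the spirit of adaptive resonance theory. This is an illustrative toy, not the authors' model: the function names, the vector coding of consonants, the vigilance threshold, and the amplification factor are all assumptions.

```python
# Toy sketch of an ART-style match/reset cycle, loosely following the
# abstract's description. All names and parameters are hypothetical;
# this is not the published model's dynamics.

def match_score(expectation, working_memory):
    """Overlap of a top-down expectation with bottom-up working-memory items."""
    overlap = sum(min(e, w) for e, w in zip(expectation, working_memory))
    total = sum(working_memory)
    return overlap / total if total else 0.0

def resonate_or_reset(expectation, working_memory, vigilance=0.8):
    """If the incoming item matches the active expectation, resonance
    amplifies consistent items; a mismatch resets the resonance."""
    if match_score(expectation, working_memory) >= vigilance:
        # resonance: amplify working-memory items consistent with the expectation
        amplified = [min(e, w) * 1.5 for e, w in zip(expectation, working_memory)]
        return "resonate", amplified
    # mismatch (e.g. C2 differs from the C1 expectation): reset the resonance
    return "reset", [0.0] * len(working_memory)

# Repeated consonant (VC-CV): the expectation matches, so resonance persists.
print(resonate_or_reset([1.0, 0.0], [1.0, 0.0])[0])  # resonate
# Different consonant (VC1-C2V): mismatch triggers a reset.
print(resonate_or_reset([1.0, 0.0], [0.0, 1.0])[0])  # reset
```

Under this reading, the 150 ms closure-time difference corresponds to the extra time a prolonged resonance takes to decay for a repeated consonant, versus the fast reset triggered by a mismatching one.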