| Literature DB >> 27033087 |
Simon Carlile, Alex Fox, Emily Orchard-Mills, Johahn Leung, David Alais.
Abstract
The location of a sound is derived computationally from acoustical cues rather than being inherent in the topography of the input signal, as it is in vision. Since Lord Rayleigh, descriptions of that representation have swung between "labeled line" and "opponent process" models. Employing a simple variant of a two-point separation judgment using concurrent speech sounds, we found that spatial discrimination thresholds changed nonmonotonically as a function of overall separation. Rather than increasing with separation, spatial discrimination thresholds first declined as two-point separation increased before reaching a turning point and increasing thereafter with further separation. This "dipper" function, with a minimum at 6° of separation, was seen for regions around the midline as well as for more lateral regions (30° and 45°). Discrimination thresholds for the binaural localization cues were linear over the same range, so these cues cannot explain the shape of the dipper functions. These data and a simple computational model indicate that the perception of auditory space involves a local code or multichannel mapping that emerges subsequent to the binaural cue coding.
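A dipper-shaped threshold function of this kind is a classic psychophysical signature of an accelerating-then-compressive transducer. The sketch below is purely illustrative and is not the paper's computational model: it uses a generic Legge-Foley-style nonlinearity with hypothetical parameter values (p, q, z, k) to show how discrimination thresholds can first fall and then rise as baseline separation grows.

```python
import numpy as np

def response(s, p=2.4, q=2.0, z=10.0):
    """Accelerating-then-compressive nonlinearity: R(s) = s^p / (s^q + z).
    Parameter values are hypothetical, chosen only to produce a dipper shape."""
    return s**p / (s**q + z)

def threshold(pedestal, k=0.1):
    """Smallest increment d such that response(pedestal + d) - response(pedestal) >= k,
    found by brute-force search over a fine grid of candidate increments."""
    d = np.linspace(1e-3, 30.0, 30000)
    gain = response(pedestal + d) - response(pedestal)
    return d[np.argmax(gain >= k)]  # first increment reaching the criterion

# Thresholds first fall as baseline separation increases, reach a minimum,
# then rise again with further separation: a "dipper" function.
separations = [0.0, 3.0, 12.0]
thresholds = [threshold(s) for s in separations]
```

With these illustrative parameters, the threshold at an intermediate separation is lower than at zero separation, and thresholds rise again at larger separations, mirroring the nonmonotonic pattern reported in the abstract.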
Keywords: auditory localization; auditory spatial perception; sensory channel processing
Year: 2016 PMID: 27033087 PMCID: PMC4854823 DOI: 10.1007/s10162-016-0560-1
Source DB: PubMed Journal: J Assoc Res Otolaryngol ISSN: 1438-7573