Talia Shrem, Leon Y. Deouell.
Abstract
Functional magnetic resonance imaging (fMRI) findings suggest that a part of the planum temporale (PT) is involved in representing spatial properties of acoustic information. Here, we tested whether this representation of space is frequency-dependent or generalizes across spectral content, as expected of high-order sensory representations. Using sounds with two different spectral contents and two spatial locations in an individually tailored virtual acoustic environment, we compared three conditions in a sparse-fMRI experiment: Single Location, in which the two sounds were both presented from one location; Fixed Mapping, in which there was a one-to-one mapping between the two sounds and the two locations; and Mixed Mapping, in which the two sounds were equally likely to appear at either of the two locations. We surmised that only neurons tuned to both location and frequency should be differentially adapted by the Mixed and Fixed mappings. Replicating our previous findings, we found adaptation to spatial location in the PT. Importantly, activation was higher for Mixed Mapping than for Fixed Mapping blocks, even though the two sounds and the two locations appeared equally often in both conditions. These results show that spatially tuned neurons in the human PT are not invariant to the spectral content of sounds.
Keywords: adaptation; fMRI; sound location; sparse imaging; tonotopy
Year: 2014 PMID: 25100973 PMCID: PMC4106454 DOI: 10.3389/fnhum.2014.00524
Source DB: PubMed Journal: Front Hum Neurosci ISSN: 1662-5161 Impact factor: 3.169
Figure 1. Stimuli and design. (A) Outside the scanner, the subject (left panel) was seated at the center of a semicircular array of five loudspeakers positioned 90 cm from the center of the head at ±60°, ±15°, and 0° relative to the midsagittal plane. Two miniature microphones embedded in standard ear plugs were placed in the external auditory canals, pointing outwards, with their front ends aligned with the external auditory meatuses (inset). In the scanner, the individually tailored sounds thus recorded were then presented to subjects (right panel) through earphones. (B) Illustration of the stimulation conditions (see text for details). Note that in all conditions, half the sounds in each block were "high" (F0 = 784 Hz, G note) and half "low" (F0 = 622 Hz, D#). The sound blocks differed only in the mapping between the high and low sounds (illustrated by the musical notes) and the two sound locations (depicted by red and black notes). The subjects watched a movie and were instructed to ignore the sounds. (C) The three conditions, as well as silence blocks, were presented in pseudo-random order in a sparse acquisition design. A single EPI volume was acquired in 2.29 s, and 20 sound stimuli were presented within the 7.71 s silent intervals between scans, i.e., without interference from scanner noise.
Figure 2. Effect of the frequency and sound location combinations on BOLD signal. Right: Top—Functional ROI of location-sensitive voxels in the STG, defined by contrasting Mixed Mapping + Fixed Mapping vs. Single Location blocks, p < 0.05, corrected. The color scale represents t-values. Bottom—mean (and standard error of the mean) difference in beta values within this ROI for the contrast Mixed > Fixed. Left: Top—Pre-defined ROI of location-sensitive voxels within the STG, based on an independent set of subjects from Deouell et al. (2007). Bottom—mean difference in beta values for the contrast Mixed > Fixed within this ROI, with standard errors of the mean.
Figure 3. Whole-brain analysis. Mixed Mapping > Single Location contrast, showing significant voxels in color; p < 0.05, corrected.