| Literature DB >> 31663229 |
Matthias Staib, Aslan Abivardi, Dominik R. Bach
Abstract
Auditory cortex is required for discriminative fear conditioning beyond the classical amygdala microcircuit, but its precise role is unknown. It has previously been suggested that Heschl's gyrus, which includes primary auditory cortex (A1) but also other auditory areas, encodes threat predictions during presentation of conditioned stimuli (CS) consisting of monophones or frequency sweeps. The latter resemble natural prosody and contain discriminative spectro-temporal information. Here, we use functional magnetic resonance imaging (fMRI) in humans to address CS encoding in A1 for stimuli that contain only spectral, but no temporal, discriminative information. Two musical chords (complex) or two monophone tones (simple) were presented in a signaled reinforcement context (reinforced CS+ and nonreinforced CS−), or in a different context without reinforcement (neutral sounds, NS1 and NS2), with an incidental sound-detection task. CS/US association encoding was quantified as the increased discriminability of BOLD patterns evoked by CS+/CS−, compared to NS pairs with similar physical stimulus differences and task demands. A1 was defined at the single-participant level, based on individual anatomy. We find that in A1, discriminability of CS+/CS− was higher than that of NS1/NS2. This representation of unconditioned stimulus (US) prediction was of comparable magnitude for both types of sounds. We did not observe such encoding outside A1. Unlike the frequency sweeps investigated previously, musical chords did not share representations of US prediction with monophone sounds. To summarize, our findings suggest a decodable representation of US predictions in A1 for various types of CS, including musical chords, which contain no temporal discriminative information.
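The abstract's key measure is discriminability of multivoxel BOLD patterns above a permutation-estimated chance baseline, compared between CS+/CS− and NS1/NS2. A minimal sketch of that analysis logic on synthetic patterns, using a leave-one-out nearest-centroid decoder (the classifier, data, and all parameters here are illustrative, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

def decode_accuracy(X, y):
    """Leave-one-out nearest-centroid decoding of binary labels y
    from multivoxel patterns X (n_trials x n_voxels)."""
    hits = 0
    for i in range(len(y)):
        train = np.arange(len(y)) != i
        c0 = X[train & (y == 0)].mean(axis=0)
        c1 = X[train & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        hits += pred == y[i]
    return hits / len(y)

def discriminability(X, y, n_perm=100):
    """Decoding accuracy above a label-permutation chance baseline."""
    null = np.mean([decode_accuracy(X, rng.permutation(y))
                    for _ in range(n_perm)])
    return decode_accuracy(X, y) - null

# Synthetic "A1" patterns: CS+/CS- carry a signal, NS1/NS2 do not
n_trials, n_vox = 60, 40
y = np.repeat([0, 1], n_trials // 2)
cs = rng.normal(size=(n_trials, n_vox)) + 1.0 * y[:, None]  # separable
ns = rng.normal(size=(n_trials, n_vox))                     # noise only
print(discriminability(cs, y) > discriminability(ns, y))
```

In this construction the CS pair is decodable well above the permutation baseline while the NS pair is not, mirroring the paper's contrast of CS+/CS− versus NS1/NS2 discriminability.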
Keywords: associative learning; discriminative fear conditioning; emotional learning; multivariate pattern analysis; spectrotemporal information; threat conditioning; threat representation
Mesh:
Year: 2019 PMID: 31663229 PMCID: PMC7268068 DOI: 10.1002/hbm.24846
Source DB: PubMed Journal: Hum Brain Mapp ISSN: 1065-9471 Impact factor: 5.399
Figure 1 (a) We compared "simple" monophone sounds with "complex" triads, in two different contexts: a reinforcement context with CS+ (reinforced) and CS− (nonreinforced), and a nonreinforcement context with neutral sounds (NS) in which participants were explicitly instructed about the absence of the US. Frequencies are shown for each bass tone. Dashed lines signify the root of each triad, which served as the discriminative spectral feature between the two chords. (b) Block order in the fMRI experiment, and intra‐trial procedure
Reaction time statistics
| Marginal means (SEM) in ms | CS− | CS+ | NS1 | NS2 |
|---|---|---|---|---|
| Simple | 737 (36) | 852 (28) | 782 (37) | 868 (43) |
| Complex | 776 (40) | 844 (48) | 850 (47) | 832 (48) |
Note: Participants were instructed to respond quickly, within a response time limit of 3 s.
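The mean (SEM) entries above could be reproduced from trial-level response times; a minimal sketch with synthetic data (the distribution parameters are illustrative, not the study's values), applying the 3 s response limit mentioned in the note:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical trial-level reaction times (ms) for one design cell
# (e.g. simple / CS-); loc and scale are illustrative, not the study's data
rt = rng.normal(loc=737, scale=200, size=30)
rt = rt[rt <= 3000]  # drop responses beyond the 3 s response limit

mean = rt.mean()
sem = rt.std(ddof=1) / np.sqrt(len(rt))  # SEM over trials
print(f"{mean:.0f} ({sem:.0f}) ms")
```

Note that the table reports marginal means with SEMs, which would first be aggregated within participants; this sketch only shows the "mean (SEM) ms" format for a single cell.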
Figure 2Fear learning strength, quantified as CS/NS‐associated sympathetic arousal, that is, amplitude of estimated central input into the sudomotor/sweat gland system, measured by skin conductance responses. Error bars: Group‐level SEM
ANOVA on CS/NS‐associated sympathetic arousal
| Effect | Behavioral experiment: df | F | p | fMRI experiment: df | F | p |
|---|---|---|---|---|---|---|
| CS | 1, 3,261 | 84.1 | <.001 | 1, 3,352 | 43.8 | <.001 |
| Context | 1, 3,261 | 997.1 | <.001 | 1, 3,352 | 183.9 | <.001 |
| Complexity | 1, 3,261 | 5.0 | .026 | 1, 3,352 | 46.6 | <.001 |
| CS × context | 1, 3,261 | 68.4 | <.001 | 1, 3,352 | 14.4 | <.001 |
| CS × complexity | 1, 3,261 | <1 | .37 | 1, 3,352 | 2.3 | .12 |
| Context × complexity | 1, 3,261 | <1 | .51 | 1, 3,352 | 12.7 | <.001 |
| CS × context × complexity | 1, 3,261 | <1 | .43 | 1, 3,352 | <1 | .48 |
Note: The CS factor has levels CS+/response-matched NS and CS−/response-matched NS. Results demonstrate similar fear learning (main effect of CS and CS × context interaction) in both complexity conditions.
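Each main effect in the table above is an F test with numerator df = 1 and a large trial-level denominator df. A minimal numpy sketch of such a test for one two-level factor, on synthetic arousal estimates (group sizes and effect size are illustrative, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(2)

def f_oneway_two_groups(a, b):
    """F statistic for a single two-level factor, df = (1, n_a + n_b - 2),
    i.e. the shape of each main-effect test in the table above."""
    grand = np.concatenate([a, b]).mean()
    ss_between = (len(a) * (a.mean() - grand) ** 2
                  + len(b) * (b.mean() - grand) ** 2)
    ss_within = ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
    df1, df2 = 1, len(a) + len(b) - 2
    return (ss_between / df1) / (ss_within / df2), (df1, df2)

# Hypothetical trial-level arousal estimates: CS trials vs. matched NS trials
cs = rng.normal(0.6, 1.0, size=1650)
ns = rng.normal(0.4, 1.0, size=1650)
F, df = f_oneway_two_groups(cs, ns)
print(df, round(F, 1))
```

The real analysis is a full factorial ANOVA (CS × context × complexity), so the sums of squares are partitioned across all factors at once; this sketch only shows where a single F(1, N − 2) comes from.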
Figure 3 Mass‐univariate contrast of all stimuli versus baseline (main effect of sound) and US+ > US− after CS+ (p < .05, FWE‐corrected). Within the field of view, sounds evoke BOLD signal across the temporal plane and superior temporal gyrus, as well as in thalamic structures, and the US evokes BOLD signal in insula and amygdala
Figure 4(a) Discriminability (mean ± between‐participant SEM above baseline performance estimated in a random permutation test) of multivoxel BOLD patterns to CS+/CS− or NS1/NS2 within A1. CS is better distinguished than NS across simple (monophone) and complex (triads) sounds. (b) Region of interest definition within Heschl's gyrus: Probability map of MNI‐normalized mask across participants, projected onto flattened cortex template. White dashed boundaries outline the atlas‐based mask of Heschl's gyrus for comparison
ANOVA results for classification of CS+/CS−
| Effect | A1: df | F | p | Heschl's gyrus: df | F | p |
|---|---|---|---|---|---|---|
| Context | 1, 119 | 30.9 | <.001 | 1, 133 | 6.2 | .014 |
| Complexity | 1, 119 | 10.8 | .001 | 1, 133 | 10.6 | .001 |
| Hemisphere | 1, 119 | 1.9 | .17 | 1, 133 | <1 | .42 |
| Context × complexity | 1, 119 | <1 | .39 | 1, 133 | 1.5 | .29 |
| Context × hemisphere | 1, 119 | 2.4 | .12 | 1, 133 | <1 | .72 |
| Complexity × hemisphere | 1, 119 | <1 | .70 | 1, 133 | <1 | .69 |
| Context × complexity × hemisphere | 1, 119 | <1 | .73 | 1, 133 | <1 | .30 |
Note: A1: native‐space definition of A1 from individual anatomy. Heschl's gyrus: Probabilistic atlas‐based mask for comparison with previous work.