N Sankaran, W F Thompson, S Carlile, T A Carlson.
Abstract
In music, the perception of a pitch is governed largely by its tonal function given the preceding harmonic structure. While behavioral research has advanced our understanding of the perceptual representation of musical pitch, relatively little is known about its representational structure in the brain. Using magnetoencephalography (MEG), we recorded evoked neural responses to different tones presented within a tonal context. Multivariate pattern analysis (MVPA) was applied to "decode" the stimulus that listeners heard based on the underlying neural activity. We then characterized the structure of the brain's representation using decoding accuracy as a proxy for representational distance, and compared this structure to several well-established perceptual and acoustic models. The observed neural representation was best accounted for by a model based on the Standard Tonal Hierarchy, whereby differences in the neural encoding of musical pitches correspond to differences in their perceived stability. By confirming that perceptual differences mirror those in the underlying neuronal population coding, our results provide a crucial link in understanding the cognitive foundations of musical pitch across psychological and neural domains.
Year: 2018 PMID: 29339790 PMCID: PMC5770452 DOI: 10.1038/s41598-018-19222-3
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1. Experimental paradigm. Each trial consisted of a four-chord tonal context followed by a single probe-tone. Stimuli were piano tones. Each chord and tone was 650 ms in duration, and a silent interval of 650 ms separated the last chord and probe-tone. Contexts were either in the key of C major (top) or F# major (bottom). Subsequent probe-tones were C4, G4, F#4, or C#4. When the context was in the key of C major, the former two probe-tones were “in-key” (tonic & dominant), while the latter two were “out-of-key” (augmented 4th & minor 2nd). When the context was in the key of F# major, this mapping reversed.
Figure 2. Decoding pitch-class from MEG activity. Neural distinctions were probed at each time-point from −100 ms to 1000 ms relative to the onset of each probe-tone. Performance is averaged across both tonal contexts (C major and F# major) and all subjects. Statistically significant time-points are indicated by the black points underneath each curve (p < 0.01; Wilcoxon signed-rank test, corrected by controlling the false discovery rate). Shaded regions indicate standard errors. (A) Classification accuracy for discriminating between in-key and out-of-key tones. (B) Accuracy for decoding the two in-key tones (tonic/dominant). (C) Accuracy for decoding the two out-of-key tones (minor 2nd/augmented 4th). (D) Time-averaged decoding performance for each of the distinctions assessed in (A–C) over 250–600 ms (the period of maximal context-related effects; see section 2.4). Significance is indicated by asterisks, where *p < 0.05; **p < 0.01 (Bonferroni-corrected Wilcoxon signed-rank test).
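The time-resolved decoding summarized in Figure 2 amounts to training a classifier on sensor patterns at each time-point and scoring held-out trials. The sketch below is an illustrative stand-in, not the authors' actual pipeline: it uses a leave-one-out nearest-centroid classifier on toy "sensor" vectors, and all names and data are hypothetical.

```python
import math

def nearest_centroid_accuracy(trials, labels):
    """Leave-one-out accuracy of a nearest-centroid classifier.
    trials: list of sensor-pattern vectors (one per trial); labels: class per trial."""
    correct = 0
    for i, (x, y) in enumerate(zip(trials, labels)):
        # Build per-class centroids from every trial except the held-out one.
        sums, counts = {}, {}
        for j, (xj, yj) in enumerate(zip(trials, labels)):
            if j == i:
                continue
            counts[yj] = counts.get(yj, 0) + 1
            sums[yj] = [a + b for a, b in zip(sums.get(yj, [0.0] * len(xj)), xj)]
        centroids = {c: [v / counts[c] for v in sums[c]] for c in sums}
        # Classify the held-out trial by Euclidean distance to each centroid.
        pred = min(centroids, key=lambda c: math.dist(x, centroids[c]))
        correct += pred == y
    return correct / len(trials)

def decode_timecourse(data, labels):
    """data[t] holds the trials-by-sensors patterns at time-point t.
    Returns decoding accuracy at each time-point, as in the Fig. 2 curves."""
    return [nearest_centroid_accuracy(trials, labels) for trials in data]

# Toy example: four trials, two well-separated classes at one time-point.
trials = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
labels = ["in-key", "in-key", "out-of-key", "out-of-key"]
print(decode_timecourse([trials], labels))  # [1.0] on this separable toy data
```

In the study itself, decoding accuracy above chance at a given time-point indicates that the neural patterns evoked by the two tone categories are reliably distinct at that latency.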
Figure 3. Representational similarity analysis of pitch-class. (a) Neural dissimilarities summarized in a time-averaged Representational Dissimilarity Matrix (RDM). (b) Multidimensional scaling (MDS) applied to the time-averaged neural RDM provides an intuitive visualization of the representational structure of musical pitch in the brain. (c) The time-varying neural structure is indexed with a new RDM at each time-point and compared with three candidate models. (d) Time-varying correlation (Kendall’s tau-a rank-order) between the observed neural structure and each of the candidate models. Significance is indicated by the points below the curves (p < 0.05; randomization test; FDR corrected). Shaded regions indicate standard errors. (e) Time-averaging the neural-to-candidate correlations over 250–600 ms reveals that neural dissimilarities are significantly correlated with the Tonal Hierarchy (p < 0.05; randomization test), with the average correlation closely tracking the noise ceiling (indicated by the shaded region). Bars indicate standard errors.
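The RSA comparison in panels (c)–(e) reduces to flattening each RDM to its unique pairwise dissimilarities and rank-correlating the neural values with each candidate model's values via Kendall's tau-a. A minimal sketch, with hypothetical 4×4 RDMs standing in for the four probe-tones (C4, G4, F#4, C#4):

```python
from itertools import combinations

def rdm_vector(rdm):
    """Upper triangle (excluding the diagonal) of a square RDM, flattened."""
    n = len(rdm)
    return [rdm[i][j] for i, j in combinations(range(n), 2)]

def kendall_tau_a(x, y):
    """Kendall's tau-a: (concordant - discordant pairs) / total pairs.
    Tied pairs count as neither, which is why tau-a suits RDM comparison."""
    pairs = list(combinations(range(len(x)), 2))
    conc = disc = 0
    for i, j in pairs:
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    return (conc - disc) / len(pairs)

# Hypothetical dissimilarities over (C4, G4, F#4, C#4); values are illustrative.
neural_rdm = [[0, 1, 3, 4],
              [1, 0, 3, 3],
              [3, 3, 0, 2],
              [4, 3, 2, 0]]
model_rdm = [[0, 1, 4, 4],   # e.g. an in-key vs. out-of-key model
             [1, 0, 4, 4],
             [4, 4, 0, 1],
             [4, 4, 1, 0]]
tau = kendall_tau_a(rdm_vector(neural_rdm), rdm_vector(model_rdm))
print(round(tau, 3))  # 0.533 for these toy matrices; positive tau = model fit
```

Repeating this correlation with the RDM computed at each time-point yields the neural-to-model time courses shown in panel (d).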
Figure 4. Decoding acoustically identical probe-tones. MVPA was applied to decode pitch-class from physically identical tones that were preceded by different tonal contexts.