Nicole Calma-Roddin, John E. Drury.
Abstract
Studies of the relationship between language and music have suggested these two systems may share processing resources involved in the computation/maintenance of abstract hierarchical structure (syntax). One type of evidence comes from ERP interference studies involving concurrent language/music processing showing interaction effects when both processing streams are simultaneously perturbed by violations (e.g., syntactically incorrect words paired with incongruent completion of a chord progression). Here, we employ this interference methodology to target the mechanisms supporting long-term memory (LTM) access/retrieval in language and music. We used melody stimuli from previous work showing out-of-key or unexpected notes may elicit a musical analogue of language N400 effects, but only for familiar melodies, and not for unfamiliar ones. Target notes in these melodies were time-locked to visually presented target words in sentence contexts manipulating lexical/conceptual semantic congruity. Our study succeeded in eliciting the expected N400 responses from each cognitive domain independently. Among several new findings we argue to be of interest, these data demonstrate that: (i) language N400 effects are delayed in onset by concurrent music processing only when melodies are familiar, and (ii) double violations with familiar melodies (but not with unfamiliar ones) yield a sub-additive N400 response. In addition: (iii) early negativities (RAN effects), which previous work has connected to musical syntax, along with the music N400, were together delayed in onset for familiar melodies relative to the timing of these effects reported in the previous music-only study using these same stimuli, and (iv) double violation cases involving unfamiliar/novel melodies also delayed the RAN effect onset.
These patterns constitute the first demonstration of N400 interference effects across these domains and together contribute previously undocumented types of interactions to the available pool of findings relevant to understanding whether language and music may rely on shared underlying mechanisms.
Year: 2020 PMID: 32641708 PMCID: PMC7343814 DOI: 10.1038/s41598-020-66732-0
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1. Behavioral responses for the end-of-sentence judgment (left) and the Name That Tune task (right). Error bars represent 95% CIs.
Figure 2. Grand average ERPs (all conditions) and scalp difference maps (violation minus control). Music violations are shown in the top panel; language and double violations are shown in the bottom panel. Familiar (left-hand plots) and unfamiliar (right-hand plots) melody conditions are plotted separately. Note that independent music and language N400 responses would be expected to yield a larger effect for the familiar-melody double-violation condition (solid green, bottom left) due to additive combination with the music N400. Instead, a sub-additive pattern obtains, indicated by the very similar N400s for language violations whether concurrent music violations occurred (green) or not (blue).
Figure 3. Difference waves for all violations in the familiar (left) and unfamiliar (center) melody conditions, plotted against the predicted additive response profile for the double-violation conditions; N400 responses for language and double violations are superimposed for the familiar and unfamiliar melody conditions (right). Note that the double violation (green trace) is sub-additive, corresponding to the Correctness × Key interaction (see also Fig. 2 and the main-text report of the 450–550 ms time-window findings).