Mirella Manfredi, Neil Cohn, Marta Kutas.
Abstract
Researchers have long questioned whether information presented through different sensory modalities involves distinct or shared semantic systems. We investigated uni-sensory cross-modal processing by recording event-related brain potentials to words replacing the climactic event in a visual narrative sequence (comics). We compared Onomatopoeic words, which phonetically imitate action sounds (Pow!), with Descriptive words, which describe an action (Punch!), that were (in)congruent within their sequence contexts. Across two experiments, larger N400s appeared to Anomalous Onomatopoeic or Descriptive critical panels than to their congruent counterparts, reflecting a difficulty in semantic access/retrieval. In addition, Descriptive words evinced a greater late frontal positivity than Onomatopoeic words, suggesting that, though plausible, they may be less predictable/expected in visual narratives. Our results indicate that uni-sensory cross-modal integration of word/letter-symbol strings within visual narratives elicits ERP patterns typically observed for written sentence processing, thereby suggesting the engagement of similar domain-independent integration/interpretation mechanisms.
Keywords: Comics; Event-related potentials; Late positivity; N400; Onomatopoeic words; Visual language; Visual narrative
Year: 2017 PMID: 28242517 PMCID: PMC5465314 DOI: 10.1016/j.bandl.2017.02.001
Source DB: PubMed Journal: Brain Lang ISSN: 0093-934X Impact factor: 2.381