| Literature DB >> 23750145 |
David A Havas, James Matheson.
Abstract
Language can impact emotion, even when it makes no reference to emotion states. For example, reading sentences with positive meanings ("The water park is refreshing on the hot summer day") induces patterns of facial feedback congruent with the sentence emotionality (smiling), whereas sentences with negative meanings induce a frown. Moreover, blocking facial afference with botox selectively slows comprehension of emotional sentences. Therefore, theories of cognition should account for emotion-language interactions above the level of explicit emotion words, and the role of peripheral feedback in comprehension. For this special issue exploring frontiers in the role of the body and environment in cognition, we propose a theory in which facial feedback provides a context-sensitive constraint on the simulation of actions described in language. Paralleling the role of emotions in real-world behavior, our account proposes that (1) facial expressions accompany sudden shifts in wellbeing as described in language; (2) facial expressions modulate emotional action systems during reading; and (3) emotional action systems prepare the reader for an effective simulation of the ensuing language content. To inform the theory and guide future research, we outline a framework based on internal models for motor control. To support the theory, we assemble evidence from diverse areas of research. Taking a functional view of emotion, we tie the theory to behavioral and neural evidence for a role of facial feedback in cognition. Our theoretical framework provides a detailed account that can guide future research on the role of emotional feedback in language processing, and on interactions of language and emotion. It also highlights the bodily periphery as relevant to theories of embodied cognition.
Keywords: botox; constraint satisfaction; embodied cognition; emotion; facial feedback; language comprehension; motor control; simulation
Year: 2013 PMID: 23750145 PMCID: PMC3664318 DOI: 10.3389/fpsyg.2013.00294
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
Figure 1. Facial EMG change in microvolts from baseline (1000 ms before sentence onset) for emotional sentences across sentence quarters, and overall (inset; vertical bars represent mean EMG change during sentence presentation, and horizontal bars indicate significant comparisons), from Havas et al. Activity in muscles for frowning (corrugator) and smiling (orbicularis and zygomaticus) diverges rapidly after onset of happy, angry, and sad sentences. The fourth sentence quarter corresponds to participants' pressing of a button to indicate they understood the sentence. Sentence presentation durations have been standardized.
Figure 2. A simplified internal models framework based on Glenberg and Gallese's model. Here, we add a signal for learning to predict the reward of actions. Multiple modules, composed of paired predictors and controllers, anticipate the sensory and affective consequences of actions. Prediction error, derived from the actual sensory and affective consequences, drives learning in the controller and adjusts the responsibility for a particular module. As in Glenberg and Gallese's model, actual motor output is a weighted function of modules, higher-level modules provide hierarchical control of goal-based actions in the form of prior probabilities that influence lower-level module selection, and a gain controller is added for simulation in language comprehension.