The development of spontaneous facial responses to others' emotions in infancy: An EMG study.

Jakob Kaiser1, Maria Magdalena Crespo-Llado2, Chiara Turati3, Elena Geangu4.   

Abstract

Viewing facial expressions often evokes facial responses in the observer. These spontaneous facial reactions (SFRs) are believed to play an important role in social interactions. However, their developmental trajectory and the underlying neurocognitive mechanisms remain poorly understood. In the current study, 4- and 7-month-old infants were presented with facial expressions of happiness, anger, and fear. Electromyography (EMG) was used to measure activation in muscles relevant for forming these expressions: zygomaticus major (smiling), corrugator supercilii (frowning), and frontalis (forehead raising). The results indicated no selective activation of the facial muscles for the expressions in 4-month-old infants. For 7-month-old infants, evidence for selective facial reactions was found especially for happy faces (leading to increased zygomaticus major activation) and fearful faces (leading to increased frontalis activation), while angry faces did not elicit a clear differential response. These results suggest that emotional SFRs may be the result of complex neurocognitive mechanisms which lead to partial mimicry but are also likely to be influenced by evaluative processes. Such mechanisms seem to undergo important developments at least until the second half of the first year of life.

Year:  2017        PMID: 29235500      PMCID: PMC5727508          DOI: 10.1038/s41598-017-17556-y

Source DB:  PubMed          Journal:  Sci Rep        ISSN: 2045-2322            Impact factor:   4.379


Introduction

Emotional facial expressions are rich and powerful means of communicating information about one's affective states, as well as about the environment in which we live. Not surprisingly, by adulthood we develop considerable expertise in processing facial expressions quickly and accurately. A testimony to their importance and saliency is the fact that the perception of emotional faces often elicits emotionally convergent facial responses in the observer. For example, during social interactions, we often respond rapidly with emotional facial expressions which are similar to those we observe in others, such as smiling when we see someone happy. These spontaneous facial responses (SFRs), which are sometimes covert and not visible through direct observation[1,2], are nonetheless thought to play crucial roles in how we communicate and empathise with each other, as well as in establishing cohesive social groups[3,4]. Impairments in these social abilities are commonly reported in pathologies characterised by atypical social functioning, such as autism, conduct disorders, and psychopathy[5,6]; understanding the extent to which they are associated with atypical manifestations of emotional SFRs is therefore of high importance. The study of infants' spontaneous facial responses to others' emotions is essential in this respect. Infancy is a crucial period for tuning and optimising the brain circuitry for processing stimuli with socio-emotional relevance, setting the stage both for the refinement of early acquired social skills and for the emergence of new and more complex ones later in life[7-9]. In addition, infancy provides unique opportunities for studying SFRs to others' emotions in relative isolation from the influence of cultural norms and values, as well as from symbolic linguistic processing of emotional information. Despite their relevance, the systematic investigation of infants' facial responses to others' emotions is limited[10-12].
In order to address this developmental gap, in this study we investigated SFRs to dynamic facial expressions of emotions in 4- and 7-month-old infants using electromyography (EMG). Different neurocognitive mechanisms have been proposed to underlie the SFRs which are congruent with others' emotional expressions. One view regards them as instances of motor mirroring or motor mimicry, where the observation of others' facial movements elicits the selective activation of the corresponding muscles in the observer. These responses are thought to be largely automatic, occurring outside the mimicker's awareness, intention, and control[13,14]. In light of these characteristics, Chartrand and Bargh[15] metaphorically referred to motor mirroring as the 'chameleon effect'. Motor mimicry relies on perception-action matching mechanisms involving the shared representation of the observed and executed facial actions. At the neural level, the mirroring properties of a cortical network including the inferior frontal, premotor and inferior parietal cortex (mirror neuron system - MNS) are thought to be involved in mapping the perceived emotional facial expression onto the observer's own motor representations of producing that expression[16-18]. The simple sensory input of observing another's action leads to an activation of an internal motor representation in the observer due to the similarity between the perceived action and the motor representation used to control action execution[19,20]. The relation between motor cortex activation and the selective excitability of the muscles involved in performing an action has been regarded as supportive of this view[21]. The re-enactment of the observed expression could, in turn, even lead to the alteration of the observer's own affective state through muscular feedback[15,22]. Indeed, numerous studies have shown that adults and older children rapidly mimic the facial expressions displayed by the people with whom they interact[23-25].
However, several findings are difficult to integrate with this perception-action matching proposal. SFRs which seem to match the observed emotions have also been recorded in response to emotional cues other than faces (i.e., body postures, vocal expressions, arousing pictures)[26-30], thus in the absence of the corresponding motor model which a simple perception-action matching account requires. Moreover, observing others' facial expressions does not always elicit matching SFRs in the observer. For example, observing others' angry faces elicits SFRs specific for fear rather than anger[25,31,32]. Angry individuals represent potential sources of threat[33,34], and usually elicit fear in others, at both the subjective and psychophysiological level[35,36]. Only when angry individuals are perceived as physically weaker and as threatening one's social status do their facial displays of anger elicit similar SFRs in the observer[26,37]. Situations of competition were also shown to trigger facial responses which are incongruent with the observed emotional expressions. Instead of showing positive emotional facial expressions, adults respond with negative displays to their competitors' pleasure[38,39]. In all these examples, the facial responses converge with the meaning and informative value for the observer of the emotional signals received from others, rather than with their motor characteristics. Studies have also shown that posing a certain emotional expression can alter one's subjective emotional experience[40-43]. However, the causal link between emotional facial mimicry and changes in affective state lacks definitive evidence[44]. To account for these additional findings, it has been proposed that the SFRs which converge with the displays of affect observed in others involve emotion communicative processes[44-47].
At the heart of this emotion-communicative proposal is the idea that the evaluation of the information provided by the emotional cues for the self is critical and varies as a function of stimulus features and social context. The evaluation of the emotional information can occur at different levels, from relevance detection and coding the negative and positive reward value of the stimuli, to fast or more elaborate cognitive appraisal[48]. At the neural level, the evaluation of emotional cues involves a circuitry consisting of both subcortical and cortical structures[48-51], amongst which the amygdala, the brainstem, and the orbitofrontal cortex (OFC) have been extensively investigated (see Koelsch et al.[48] for a recent review). For example, the amygdala plays a role in the fast detection and evaluation of threat[52-55], as well as in the processing of happy events[56]. The amygdala shows connectivity and co-activation with the motor and pre-motor cortical structures involved in preparation for action[57-60], suggesting that the early evaluation of emotional cues informs the behavioural responses during social interactions[45]. The shared motor representations comprising components of the perceived action and associated predicted somatosensory consequences are also considered to be active during the perception of emotional displays. However, the role attributed to them has more to do with the anticipation of others' behaviour and intentions[44,61,62]. Components of the neural network underlying these processes are also thought to play a role in implementing the appropriate motor responses afforded by the specific social situation[62]. Recent neuroimaging investigations have shown that although the threat evaluation processes related to the amygdala slightly precede those involved in generating shared representations, these seem to interact and be integrated as soon as 200 ms after stimulus onset[63].
Given the role of the amygdala in evaluating a range of emotional events, a similar sequence of operations may also be encountered for positive emotions or for other brain structures with evaluative properties (e.g., the OFC) which are functionally connected with the motor cortex[48,51,56]. In order to understand the factors that influence facial reactions, it is important to investigate the development of infant SFRs to others' emotional facial expressions. Recently it was shown that 5-month-old infants selectively respond with increased activation of the zygomaticus major to audio-visual recordings of adults smiling and with increased activation of the corrugator supercilii to audio-visual recordings of adults crying. This selective muscle activation was not reported for unimodal presentations of adult expressions of cry and laughter (i.e., voice-only, face-only)[12]. Nonetheless, the absence of angry expressions and of contrasts between different negative emotional expressions, together with the lack of a truly developmental perspective given that only one age group was tested, highly limit the conclusions that can be drawn based on these findings. In the current study we employed an EMG paradigm which contrasts the responses towards three dynamic facial expressions of emotion (i.e., happiness, anger, and fear) in three facial muscles that have been found to be selectively activated in these facial displays (i.e., zygomaticus major for smiling during happiness, corrugator supercilii for frowning in anger, and frontalis for forehead raising in fear displays). The study was conducted with both 4- and 7-month-old infants. The choice of these age groups was motivated by evidence suggesting that they represent important hallmarks in the development of the ability to process emotional information from faces[64].
Although even very young infants are able to discriminate between different facial expressions of emotions[65-67], it seems that only from around the age of 7 months do they rely on adults' specific emotional expressions to guide their behaviour towards the stimuli in the environment[64,68,69]. For example, it is around this age that infants begin to perceive fearful facial expressions as specific cues for threat[64,68]. If SFRs were predominantly a case of automatic perception-action matching, one would expect stronger activation in the muscle mainly involved in a given expression (zygomaticus major for happy faces, corrugator supercilii for angry faces, and frontalis for fearful faces) relative to the other facial muscles. Cases where SFRs do not match facial expressions would support the view that mechanisms additional to direct mirror matching are responsible for SFRs, such as evaluative-communicative processes. From this perspective, emotion-congruent SFRs are expected to occur at the age when infants are able to process the informative value of the perceived expression. In light of evidence suggesting that only towards the age of 7 months are infants more likely to process the informative value of certain emotional facial expressions, we anticipate SFRs congruent with the observed expressions in 7- rather than 4-month-old infants. The comparisons across multiple emotions and multiple facial muscles at two developmental periods will allow us to draw conclusions with regard to the specificity and selectivity of infant emotional SFRs.

Results

Mean amplitude values expressed as z-scores were analysed using a mixed ANOVA with Muscle (frontalis, corrugator supercilii, zygomaticus major), Emotion (happy, anger, fear), and Time window (Time 1, Time 2) as within-subject factors and Age group (4-month-old vs. 7-month-old) as a between-subject factor. All statistical tests were conducted at the 0.05 level of significance (two-tailed), with Bonferroni correction for post-hoc comparisons. The results show significant interactions of Time window × Age group (F(1,49) = 5.466, p = 0.024, ηp² = 0.100), Emotion × Muscle × Age group (F(4,196) = 3.276, p = 0.013, ηp² = 0.063), as well as Emotion × Muscle × Time window × Age group (F(4,196) = 2.749, p = 0.029, ηp² = 0.053). No other significant main effects or interactions were observed (p > 0.052). To explore the Muscle × Emotion × Age group × Time window interaction, we performed a 3 (Muscle: frontalis, corrugator or zygomaticus) × 3 (Emotion: happy, anger or fear) × 2 (Time window: Time 1, Time 2) repeated-measures ANOVA for each age group. Also, since we transformed facial reactions to z-scores, we were able to analyse whether the reactions to each emotion differed between muscles.
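The within-muscle z-transformation that makes activation levels comparable across muscles can be sketched as follows. This is a minimal illustration of the standard procedure, not the authors' actual analysis code; the function name and data layout are hypothetical.

```python
import numpy as np

def zscore_per_muscle(amplitudes):
    """Standardise one participant's mean EMG amplitudes within each muscle.

    amplitudes: array of shape (n_muscles, n_conditions) holding mean
    rectified EMG amplitude per condition for a single participant.
    Z-scores are computed across conditions separately for each muscle,
    so that differently scaled muscles can be compared directly.
    """
    amplitudes = np.asarray(amplitudes, dtype=float)
    mean = amplitudes.mean(axis=1, keepdims=True)
    std = amplitudes.std(axis=1, ddof=1, keepdims=True)  # sample SD
    return (amplitudes - mean) / std
```

After this transformation, each muscle's scores have zero mean and unit variance across conditions, so a positive value always means "above that muscle's typical activation".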

4-month-old infants

For the 4-month-old group, an ANOVA with the factors Emotion, Muscle, and Time window revealed a significant Emotion × Muscle interaction, F(4,104) = 3.275, p = 0.014, ηp² = 0.112 (Fig. 1). The post-hoc pairwise comparisons did not reveal any significant differences in muscle activation between emotions (p > 0.261), nor any differences in activation between muscles within emotions (p > 0.054). No other main effects or interactions were observed (p > 0.088). Thus, we found no evidence of SFRs in the younger age group.
Figure 1

Means (and 95% confidence interval) of facial reactions towards the stimuli during Time 2 (1000–3000 ms from onset) for different muscles (expressed as z-scores).

7-month-old infants

For the 7-month-olds, the results show a significant interaction between Emotion, Muscle, and Time window, F(4,92) = 3.451, p = 0.011, ηp² = 0.130. No other main effects or interactions were observed (p > 0.052). This indicated that 7-month-olds showed differential facial responses towards the emotional faces that depended on time window. We conducted post-hoc pairwise comparisons in order to compare the effect of the different emotions on each muscle. For the 0 to 1000 ms time window, no significant differences between emotions were found for any of the muscles (p > 0.213). For the 1000 to 3000 ms time window, the corrugator supercilii showed significantly stronger reactions towards angry faces (M = 0.112, SE = 0.042) than happy faces (M = −0.056, SE = 0.030), p = 0.042. There were no significant differences between angry and fearful faces (M = −0.026, SE = 0.032), p = 0.167, or between happy and fearful faces, p > 0.900. For the frontalis, we found significantly stronger activation for fearful (M = 0.057, SE = 0.026) than for happy faces (M = −0.098, SE = 0.039), p = 0.023. No significant differences were found between fearful and angry faces (M = 0.047, SE = 0.044), p > 0.900, or between angry and happy faces, p = 0.213. For the zygomaticus, no significant differences emerged between the emotion categories (all p-values > 0.074; Fig. 1). For the 0 to 1000 ms time window, no significant differences in activation between muscles were found for any emotional facial expression (p > 0.849). For the 1000 to 3000 ms time interval, happy facial expressions elicited higher zygomaticus major activation (M = 0.084, SE = 0.055) compared to the corrugator supercilii (M = −0.056, SE = 0.030), p = 0.036, and the frontalis (M = −0.098, SE = 0.039), p = 0.018. There was no significant difference in reaction towards happy faces between corrugator and frontalis, p = 0.783.
For fearful faces, the frontalis (M = 0.057, SE = 0.026) showed a significantly higher activation than the zygomaticus (M = −0.096, SE = 0.038), p = 0.009. There was no significant difference for fearful faces between frontalis and corrugator supercilii (M = −0.026, SE = 0.032), p = 0.114, and no significant difference between corrugator supercilii and zygomaticus major, p = 0.316. For angry faces, no significant differences emerged between the muscles (all p-values > 0.849; Fig. 1).
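The Bonferroni-corrected pairwise comparisons reported above follow a standard pattern: paired tests between all condition pairs, with each p-value multiplied by the number of comparisons. The sketch below illustrates that logic with paired t-tests; it is an assumption-laden illustration (the function name and data layout are hypothetical, and the authors' software may have used a different post-hoc test).

```python
from itertools import combinations

import numpy as np
from scipy.stats import ttest_rel

def pairwise_bonferroni(scores, labels):
    """Paired t-tests between all condition pairs, Bonferroni-corrected.

    scores: dict mapping a condition label to a 1-D array of per-infant
    z-scored EMG amplitudes (same participants, same order, per condition).
    Returns {(a, b): corrected_p}, where each raw p-value is multiplied
    by the number of comparisons and capped at 1.0.
    """
    pairs = list(combinations(labels, 2))
    results = {}
    for a, b in pairs:
        _, p = ttest_rel(np.asarray(scores[a]), np.asarray(scores[b]))
        results[(a, b)] = min(p * len(pairs), 1.0)
    return results
```

With three emotions per muscle there are three comparisons, so a raw p-value must fall below roughly 0.0167 to survive correction at the 0.05 level.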

Discussion

Our aim was to understand the ontogeny of infants' facial responsivity to others' emotions and how this relates to current theoretical models regarding the role of perception-action matching mechanisms and affect processes. We therefore presented 4- and 7-month-old infants with dynamic facial expressions of happiness, fear, and anger, while we used EMG to measure the activation of the muscles specific for expressing these emotions (i.e., zygomaticus major, frontalis, and corrugator supercilii, respectively). The results show that infants' SFRs to dynamic emotional facial expressions undergo significant developmental changes towards the age of 7 months. The 4-month-old infants in our study did not manifest selective activation of the recorded facial muscles in response to dynamic facial cues of emotions. In fact, as Fig. 1 shows, very little facial responsivity was present for this age group. These findings are in line with previous EMG studies which show that 5-month-old infants do not match their SFRs with dynamic facial expressions of cry and laughter without additional emotion-relevant auditory cues[12], as well as with a series of behavioural studies which reported a lack of selective emotional facial responsivity for 2- to 3-month-old infants and newborns[10,70]. Our study shows for the first time that dynamic emotional facial expressions elicit selective SFRs in 7-month-old infants. Importantly, this pattern of responsivity was not generalizable across all emotional expressions. The comparisons of muscle activation between and within each emotion show that observing dynamic facial expressions of happiness leads to increased activation of the muscle specific for expressing this emotion (i.e., zygomaticus major) and decreased activation of the muscles involved in expressing fear (i.e., frontalis) and anger (i.e., corrugator supercilii).
A similar pattern of selective SFRs was also recorded for fearful faces, with an increased activation of the frontalis and decreased activation of the muscle specific for expressing happiness (i.e., zygomaticus major). In contrast, the perception of angry faces tended to lead to a more non-differentiated pattern of facial responsivity. While the muscle specific for expressing anger, the corrugator supercilii, did record an increased activation in response to angry faces compared to happy ones, this was not associated with a decrease in the activation of the muscle specific for smiling (i.e., the zygomaticus major) nor of the muscle specific for fear (i.e., the frontalis). Similar partial selectivity of behaviourally coded facial responsivity has been previously reported in studies with 2- to 3-month-old and 6-month-old infants, in which responses to more than two emotional expressions during ecological mother-infant interactions were contrasted[71-73]. Amongst the most prominent theoretical proposals for the neurocognitive mechanisms underlying SFRs to others' emotions are those attributing a primary role to perception-action matching mechanisms[16,18,22]. The fact that 7-month-old infants do not respond to all emotional expressions included in this study with selectively matching SFRs suggests that these are less likely to be simple re-enactments of the observed expressions based on perception-action matching mechanisms. Another possibility might be that infants are only capable of showing matching SFRs after sufficient familiarity with certain expressions through prior exposure. However, it seems unlikely that these results are due to differences in exposure to angry facial expressions. From around the age of 2 months, infants are exposed to parents' facial expressions of anger. Although these are not as frequent as facial expressions of happiness[74], they are probably as frequent as those of fear[75], for which infants show congruent SFRs.
Additionally, our findings are not likely to be due to an inability to perceptually discriminate or display the expressions tested. In particular, at this age infants have the ability to perceptually discriminate angry faces from various other emotional facial expressions[64,76], as well as the ability to display the facial movements specific for anger, happiness and fearfulness[71,72,77-79]. Behavioural and neuroimaging studies have shown that more elaborate representations of emotional expressions and their communicative value develop in infants after the age of 5 months, in an emotion-dependent fashion[64,76]. For example, 6- to 7-month-old but not younger infants show specific sensitivity to fearful faces as cues for threat and manifest increased attention towards objects that were looked at by fearful faces[64,68]. This ability consolidates in the following months[80] and becomes more obvious in how infants interact with their environment around the age of 12 months[69,81,82]. Although emotional expressions of anger are also relevant cues for threat, infants do not seem sensitive to their specific informative value until closer to their first birthday[83,84]. The insufficiently developed ability of 7-month-old infants to evaluate the specific informative value of angry facial expressions may partially explain their lack of selective SFRs for this expression. The immature ability of the 4-month-olds to process a variety of facial expressions may also be partially responsible for the absence of selective SFRs across all expressions included in this study. Taken together, the age differences and the pattern of selective muscle activation appear to be consistent with proposals that see SFRs not as pure motor mimicry, but as also reflecting the influence of communicative processes involving the evaluation of emotional cues[11,26,44,45,72].
This interpretation does not necessarily mean that instances of emotionally convergent SFRs may only be recorded in infants closer to the age of 7 months, but rather that they may be limited to those situations where infants are able to extract salient information from the perceived emotional cues. Previous behavioural studies which used more ecological adult-infant interaction paradigms showed that infants as young as 2-3 months manifest facial responses which tend to converge emotionally with the observed ones. However, these responses are specific to situations involving interactions between infants and their mothers, with whom they have had extensive experience in social exchanges[10,71-73,85]. In this case, infants' facial responses may reflect the appraisal of the perceived emotional cues with respect to the mother's immediate future actions which in the past elicited specific emotional responses. For example, caregivers' smiling faces are typically associated with pleasant social engagement, such as play and caring actions known to induce positive affect in the infant. In contrast, the display of negative emotional expressions is more likely to be followed by a lack of social interaction, which can be distressing for the infant[71,72,76]. This explanation would also account for those situations where the perception and evaluation of others' emotions are facilitated by the presence of multiple cues[26,27,86-89] or by the quality of the emotional cues (e.g., static versus dynamic expressions[86,90-92]). The fact that 5-month-old infants respond with emotion-convergent SFRs to audio-visual expressions of laughter and crying but not to the unimodal presentations (i.e., face-only, voice-only) of these emotional displays[12] may reflect such a facilitating effect[93-95].
Although the current findings together with those previously reported[10-12] are informative about the emergence of emotion-congruent SFRs in infancy and suggestive with regard to the complexity of the underlying neurocognitive mechanisms, further research is needed in order to draw firmer conclusions in this respect. For example, although the current study shows that facial EMG paradigms can be successfully used with infants of different ages, it does not allow establishing whether the observed facial responses are related to changes in autonomic arousal. Emotional expressions displayed by both adults and peers were found to elicit autonomic arousal indicative of emotional responsivity in infants. In particular, changes in skin conductance and pupil diameter have been reported in response to expressions of happiness, fear, anger, and general distress in infants from the age of 4 months[96-99]. Changes in autonomic arousal also seem to be significantly related to infants' facial responses in emotion elicitation situations[100-103]. Thus, concurrent facial EMG and measures of psychophysiological arousal would be particularly valuable for understanding how affect-related processes contribute to the emergence of emotionally convergent SFRs during infancy and childhood. Such knowledge is also directly relevant for studying the ontogeny of affect sharing and empathy[104,105]. Extracting, processing, and responding to the emotional information presented by human faces relies on complex neural networks involving both sub-cortical and cortical structures, including those that are part of the emotion-related brain circuits (e.g., the amygdala and the orbito-frontal cortex[49,50,106]) and those functionally linked with motor preparation for action and estimating others' immediate intent for action[45,57,59,60,62,107-109].
Although the emotion-related brain structures are already functional at birth, and the connections with the other related cortical and subcortical areas established, these brain structures continue to mature and their pattern of connectivity refines over the course of postnatal development[75]. It is thus possible that the SFRs of the 7-month-old infants to happy and fearful facial expressions reflect, at least partially, these developmental changes in the underlying neural network[75,110]. Natural variations in familiarity with different social contexts, as well as in the maturation of the relevant brain networks, which are specific to the first year of life, can thus provide unique opportunities for characterizing processes that would otherwise be impossible to capture in fully mature adults[111,112]. Different experimental approaches could be adopted for further investigations into the neurocognitive mechanisms underlying emotion-congruent SFRs in infancy. For example, concurrent recordings of facial EMG and EEG-based measures of cortical activation would be particularly informative in understanding how neural development contributes to the emergence of emotionally convergent SFRs in infancy[44,62,112]. These paradigms have the potential to clarify the extent to which shared motor representations comprising components of the perceived action and associated somatosensory consequences are involved in generating emotion-congruent SFRs in infants, alongside emotion evaluation and reactivity processes. Specifying the dynamics of the facial muscle activation may also be relevant in this respect, potentially reflecting the chronology of different processes. In the present study we have shown that the selective facial muscle activation specific for emotion-congruent SFRs is overall recorded between 1000 and 3000 ms after stimulus onset. This timing is similar to that reported in previous studies with young children[32].
Nevertheless, more subtle latency differences between emotions, and between muscles within emotion categories, may be present[113]. The stimuli used in the current study were not matched for the precise timing of facial actions, therefore not allowing a more refined time-sensitive analysis. Artificially developed stimuli, such as morphed faces or static facial expressions, would be particularly suitable in this respect. Being able to detect and respond to others' emotions is essential to our social lives. Over the past decades, a large number of studies have shown that adults tend to respond with rapid facial responses which converge emotionally with the emotions they perceive in others[86]. Although much more limited, evidence has also emerged in recent years to show that similar patterns of facial responsivity can be observed during childhood[31,114-116]. Despite being a well-documented phenomenon in adulthood, debates regarding its early ontogeny and the underlying neurocognitive mechanisms remain open[10,12,27,44]. Our study shows that spontaneous facial responses which converge emotionally with the facial expressions observed in others can be recorded in 7- but not in 4-month-old infants. The pattern of infant emotional SFRs suggests that they may rely on complex neurocognitive mechanisms[44], which undergo important developments at least until the second half of the first year of life. The factors contributing to the development of infants' emotional SFRs remain to be established, with direct relevance for understanding the emergence of related complex social abilities like communication and empathy[105,117].

Methods

Participants

Twenty-seven 4-month-old infants (11 females, Mage = 135.11 days, SD = 10.08 days) and twenty-four 7-month-old infants (14 females, Mage = 226.17 days, SD = 9.90 days) were included in the final analysis. An additional five 4-month-old and eight 7-month-old infants were tested but not included in the final sample due to technical issues (n = 4) or inattentiveness resulting in fewer than 5 good trials per condition (n = 10). All participants were recruited from a small urban area in North West England. Informed consent was obtained from all parents prior to the beginning of the procedure. The procedure was carried out in accordance with the ethical standards of the Declaration of Helsinki (BMJ 1991; 302:1194). Ethical approval was granted by the Lancaster University Ethics Committee. Parents were reimbursed for their travel expenses (£10), while infants received a token for their participation.

Stimuli

Fifteen grey-scale dynamic female human faces displaying happiness (n = 5), anger (n = 5), and fear (n = 5) were taken from the Cohn-Kanade Expression database[118], which has become one of the most widely used stimulus sets for studies of facial expression analysis[118,119]. One of the main strengths of this dataset is that all facial expressions have been fully FACS coded[119]. The chosen faces were selected for their emotional valence. The selection criteria for the stimuli were that all happy facial expressions included corners of the mouth raised in a smile, all anger expressions included furrowed brows, and all fear expressions included raised eyebrows. For all stimuli, the transition between neutral and emotional expression occurred between 0 and 1000 ms, while peak expressivity was reached between 1000 and 3000 ms. The exact timing of the facial movements, specific to each emotion expression, varied within and between stimulus categories. Face images were cropped using an oval frame that allowed facial features to be visible but excluded hair, ears, and any other paraphernalia.

Procedure

Participants were tested individually in a quiet and dimly lit room. Before the electrodes were placed, the skin was cleaned with an alcohol-free wipe. The electrodes were attached by one experimenter while a second blew soap bubbles or manipulated a rattle toy, as needed, to keep the participant calm and distracted. Once the facial electrodes were in place, participants sat on their mothers' lap for the entire procedure, approximately 70 cm from a 24-inch monitor. Parents were instructed to hold their infants' hands as still as possible to prevent the infants from pulling at the facial electrodes, not to speak to them, and not to point towards the screen during stimulus presentation. Each trial started with a central fixation cross for 1000 ms, during which baseline muscle activity was established. The fixation cross was followed by the emotional facial expression, presented on a black screen for 3000 ms, and then by a blank screen (see Fig. 2). Between trials, a dynamic non-social attention grabber was played whenever infants showed signs of becoming distracted. Experimenter-controlled rather than automatic presentation of the attention grabber after each trial is common in infant psychophysiology paradigms requiring the presentation of many trials[120] and capitalises on infants' natural bouts of attention. The procedure continued for as long as infants paid attention to the stimuli. On average, participants completed 55.12 trials (happy faces: M = 18.35 trials, Min = 10, Max = 30; angry faces: M = 18.12 trials, Min = 10, Max = 30; fearful faces: M = 18.65 trials, Min = 11, Max = 30). The entire procedure was video recorded in order to establish whether the infants had watched the faces in each trial and to facilitate artifact detection during data analysis.
The complete experimental session took approximately 10 min.
Figure 2

Example of a trial structure and stimuli used in the study. After a 1000 ms central fixation cross, the participants were presented for 3000 ms with the dynamic facial expression of either anger, happiness or fear displayed by a female adult. The emotional stimulus was followed up by a blank screen. The non-social attention grabber was presented whenever it was required to recapture participants’ attention to the screen. (The face picture included in the figure is for illustration purposes only and not part of the stimuli used in the study).


EMG data acquisition and analysis

Electromyography was used to record muscle activity over the zygomaticus major (raises the cheek), the medial frontalis (raises the brow), and the corrugator supercilii (knits the brow). This method has been used extensively to record adults' facial responses to others' emotions[39]. Although the internal consistency of the recorded EMG signal in these studies tends to be low, the test-retest reliability is good[121]. Recent studies show that facial EMG is also suitable for use with young children and infants[31,32,122]. In the present study, a BIOPAC MP30 continuously recorded the EMG signal from the selected muscles using bipolar montages. Disposable 4 mm Ag-AgCl surface adhesive EMG electrodes (Unimed) were placed on the infants' face at locations corresponding to each muscle, following the guidelines of Fridlund & Cacioppo[123] and previous facial EMG studies with infants[122,124] and toddlers[32]. Electrodes were positioned on the left side of the face to obtain maximal reactions[123]. The reference electrode was positioned just below the hairline, approximately 3 cm above the nasion. The EMG signal was recorded at a sampling rate of 1 kHz, filtered offline (low pass: 150 Hz; high pass: 30 Hz), and rectified. The rectified data were averaged in 200 ms time bins, which were z-transformed for each muscle and participant individually. This is a standard procedure in facial EMG studies that allows comparisons between participants and muscles (see Supplementary Information, Fig. 2, for a depiction of the EMG signal before standardization). Participants' looking time toward the screen was coded offline to determine whether they attended to the stimuli, a common procedure in electrophysiology research with preverbal children (e.g., Lloyd-Fox et al.[125]). Trials with a looking time of less than 70% of the stimulus duration, as well as trials with excessive movement or noise artifacts, were excluded.
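The offline preprocessing described above (30–150 Hz band-pass filtering, rectification, averaging into 200 ms bins, and z-transformation per muscle and participant) can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, the Butterworth filter of order 4, and the zero-phase filtering are our assumptions; the paper specifies only the cut-off frequencies, sampling rate, bin width, and z-transform.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_emg(raw, fs=1000, band=(30.0, 150.0), bin_ms=200):
    """Band-pass filter, rectify, bin, and z-transform one muscle's EMG trace.

    raw : 1-D array of raw EMG samples for a single muscle and participant,
          sampled at fs Hz (1 kHz in the study).
    Returns the per-bin mean of the rectified signal, z-scored across bins.
    """
    # Band-pass 30-150 Hz (assumed 4th-order Butterworth, zero-phase)
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, raw)

    # Full-wave rectification
    rectified = np.abs(filtered)

    # Average into 200 ms bins (200 samples at 1 kHz)
    samples_per_bin = int(fs * bin_ms / 1000)
    n_bins = len(rectified) // samples_per_bin
    binned = rectified[: n_bins * samples_per_bin].reshape(n_bins, -1).mean(axis=1)

    # z-transform so activation is comparable across muscles and participants
    return (binned - binned.mean()) / binned.std()
```

Standardising per muscle and participant, as here, removes baseline differences in electrode impedance and muscle size, which is what makes the between-muscle comparisons in the Results section meaningful.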
Only children with a minimum of five trials per condition were included in the final statistical analyses, a criterion informed by previous studies with infants[12], children[31,114], and adults[24,126-128]. Across participants, the mean number of trials contributing to the final statistical analyses was 33.10 (happy faces: M = 11.04 trials, Min = 5, Max = 18; angry faces: M = 10.18, Min = 5, Max = 17; fearful faces: M = 11.88, Min = 5, Max = 19). Previous studies with children using a similar paradigm suggest that facial reactions to emotional expressions start to show between 500 and 1000 ms for static facial stimuli that are already fully developed in their expressivity[31,32,115], which is also consistent with adult studies[23,25,128]. As the dynamic stimuli in this study developed gradually over the first 1000 ms and remained at peak between 1000 and 3000 ms, we averaged EMG activity for each trial separately over the onset phase (Time point 1) and the peak expression phase (Time point 2). Average activation was baseline-corrected by subtracting the mean activity in the 1000 ms interval immediately before stimulus onset, and the mean across trials of the same emotion was calculated.
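Given the binned, z-scored signal, the baseline correction and two-phase averaging described above reduce each trial to two numbers per muscle. A minimal sketch, assuming 200 ms bins covering the 1000 ms pre-stimulus baseline (5 bins) followed by the 3000 ms stimulus (15 bins); the function name and bin counts are our framing of the paper's description, not its code:

```python
import numpy as np

def phase_scores(trial_bins, baseline_bins=5, onset_bins=5, peak_bins=10):
    """Baseline-correct one trial of binned EMG and average its two phases.

    trial_bins : 1-D array of z-scored 200 ms bins, ordered as
        [1000 ms baseline | 0-1000 ms onset phase | 1000-3000 ms peak phase].
    Returns (onset_mean, peak_mean), each relative to the baseline mean.
    """
    baseline = trial_bins[:baseline_bins].mean()
    onset_slice = trial_bins[baseline_bins: baseline_bins + onset_bins]
    peak_slice = trial_bins[baseline_bins + onset_bins:
                            baseline_bins + onset_bins + peak_bins]
    # Subtract the pre-stimulus mean (Time point 1 and Time point 2 scores)
    return onset_slice.mean() - baseline, peak_slice.mean() - baseline
```

Averaging these per-trial scores across all trials of the same emotion (e.g., `np.mean` over the happy trials of one infant) then yields the condition means entered into the statistical analyses.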
References (94 in total; first 10 shown)

1. [Review] Davis M, Whalen PJ. The amygdala: vigilance and emotion. Mol Psychiatry. 2001.
2. Dimberg U, Petterson M. Facial reactions to happy and angry facial expressions: evidence for right hemisphere dominance. Psychophysiology. 2000.
3. Grossmann T, Striano T, Friederici AD. Developmental changes in infants' processing of happy and angry facial expressions: a neurobehavioral study. Brain Cogn. 2006.
4. Hoffman KL, Gothard KM, Schmid MC, Logothetis NK. Facial-expression and gaze-selective responses in the monkey amygdala. Curr Biol. 2007.
5. Schutter DJLG, Hofman D, Van Honk J. Fearful faces selectively increase corticospinal motor tract excitability: a transcranial magnetic stimulation study. Psychophysiology. 2008.
6. Hennenlotter A, Dresel C, Castrop F, Ceballos-Baumann AO, Wohlschläger AM, Haslinger B. The link between facial feedback and neural activity within central circuitries of emotion: new insights from botulinum toxin-induced denervation of frown muscles. Cereb Cortex. 2008.
7. Coombes SA, Tandonnet C, Fujiyama H, Janelle CM, Cauraugh JH, Summers JJ. Emotion and motor preparation: a transcranial magnetic stimulation study of corticospinal motor tract excitability. Cogn Affect Behav Neurosci. 2009.
8. Lewis M, Ramsay DS, Sullivan MW. The relation of ANS and HPA activation to infant anger and sadness response to goal blockage. Dev Psychobiol. 2006.
9. Dimberg U. Facial reactions to facial expressions. Psychophysiology. 1982.
10. Campos JJ, Thein S, Owen D. A Darwinian legacy to understanding human infancy: emotional expressions as behavior regulators. Ann N Y Acad Sci. 2003.

(84 further entries not shown)
