Javier Villanueva-Valle 1,2, José-Luis Díaz 3, Said Jiménez 1,2, Andrés Rodríguez-Delgado 4, Iván Arango de Montis 4, Areli León-Bernal 1,2, Edgar Miranda-Terres 4, Jairo Muñoz-Delgado 1,2.
Abstract
Videotape recordings obtained during an initial, conventional psychiatric interview were used to assess possible emotional differences in facial expressions and acoustic parameters of the voice between female patients with Borderline Personality Disorder (BPD) and matched controls. The incidence of seven basic emotion expressions, emotional valence, heart rate, and the vocal frequency (f0) and intensity (dB) of discourse adjectives and interjections were determined by applying computational software to the visual (FaceReader) and sound (PRAAT) tracks of the videotape recordings. The extensive data obtained were analyzed with three statistical strategies: linear multilevel modeling, correlation matrices, and exploratory network analysis. In comparison with healthy controls, BPD patients expressed a third less sadness and showed a higher number of positive correlations (14 vs. 8), with a cluster of related nodes linking the prosodic parameters and the facial expressions of anger, disgust, and contempt. In contrast, control subjects showed negative or null correlations between those facial expressions and the prosodic parameters. It seems feasible that BPD patients restrain the facial expression of specific emotions in an attempt to achieve social acceptance. Moreover, the confluence of prosodic and facial expressions of negative emotions reflects a sympathetic activation that is opposed to the social engagement system. This BPD imbalance reflects an emotional alteration and a dysfunctional behavioral strategy that may constitute a useful biobehavioral indicator of the severity and clinical course of the disorder. This face/voice/heart-rate emotional expression assessment (EMEX) may be used in the search for reliable biobehavioral correlates of other psychopathological conditions.
Keywords: FaceReader; PRAAT; emotional conflict; exploratory network analysis; multilevel models; prosody; social engagement system; speech characteristics
Year: 2021 PMID: 33841202 PMCID: PMC8024539 DOI: 10.3389/fpsyt.2021.628397
Source DB: PubMed Journal: Front Psychiatry ISSN: 1664-0640 Impact factor: 4.157
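One of the three statistical strategies named in the abstract is the correlation matrix relating facial-expression intensities to prosodic parameters. A minimal sketch of that step, using synthetic data in place of the study's FaceReader/PRAAT measurements (variable names and values here are purely illustrative):

```python
import numpy as np

# Hypothetical per-window measurements for one group: rows are time
# windows of an interview, columns are variables. In the study these
# would come from FaceReader (expression intensities, 0-1) and PRAAT
# (f0 in Hz); here they are simulated.
rng = np.random.default_rng(0)
n_windows = 200
anger = rng.random(n_windows)
disgust = 0.6 * anger + 0.4 * rng.random(n_windows)  # built to covary with anger
f0 = 180 + 40 * rng.standard_normal(n_windows)       # fundamental frequency (Hz)

data = np.column_stack([anger, disgust, f0])
labels = ["Angry", "Disgusted", "f0"]

# Pearson correlation matrix (rowvar=False: columns are variables).
corr = np.corrcoef(data, rowvar=False)
for i, name in enumerate(labels):
    print(name, np.round(corr[i], 2))
```

A matrix like `corr`, computed separately for patients and controls, is what Figure 1 displays as a color gradient.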
Facial expression of emotion frequency.
| Variable | Controls: Mean | Controls: SD | Patients: Mean | Patients: SD | 95% CI (lower) | 95% CI (upper) | t | p |
| Neutral | 0.586 | 0.197 | 0.652 | 0.203 | −0.073 | 0.151 | 0.67 | 0.68 |
| Happy | 0.053 | 0.129 | 0.075 | 0.168 | −0.004 | 0.075 | 1.72 | 0.34 |
| Sad | 0.13 | 0.228 | 0.051 | 0.103 | −0.119 | −0.04 | −3.92 | |
| Angry | 0.019 | 0.04 | 0.011 | 0.024 | −0.016 | 0 | −1.83 | 0.34 |
| Surprised | 0.106 | 0.18 | 0.067 | 0.136 | −0.089 | 0.021 | −1.19 | 0.49 |
| Scared | 0.034 | 0.062 | 0.026 | 0.055 | −0.019 | 0.002 | −1.64 | 0.34 |
| Disgusted | 0.012 | 0.028 | 0.006 | 0.015 | −0.014 | 0.002 | −1.39 | 0.49 |
| Contempt | 0.012 | 0.035 | 0.015 | 0.04 | −0.006 | 0.009 | 0.44 | 0.78 |
| Valence | −0.114 | 0.262 | −0.002 | 0.214 | 0.066 | 0.187 | 4.05 | |
| Arousal | 0.354 | 0.184 | 0.348 | 0.163 | −0.06 | 0.068 | 0.12 | 0.91 |
| Heart rate | 68 | 8.759 | 70 | 9.332 | −3.12 | 9.92 | 1.01 | 0.53 |
p < 0.05 (cells left blank in the p column).
Acoustic parameters of the voice.
| Variable | Controls: Mean | Controls: SD | Patients: Mean | Patients: SD | 95% CI (lower) | 95% CI (upper) | t | p |
| Adjectives (f0) | 177 | 67.3 | 197 | 73.6 | −26.34 | 51.99 | 0.72 | 0.68 |
| Interjections (f0) | 172 | 66.5 | 177 | 90 | −17.17 | 20.8 | 0.19 | 0.9 |
| Adjectives (dB) | 61.5 | 5.91 | 63.6 | 7.39 | −2.75 | 8.66 | 1.12 | 0.49 |
| Interjections (dB) | 61.8 | 5.93 | 61.4 | 7.4 | −4.71 | 2.68 | −0.6 | 0.68 |
f0 = fundamental frequency in Hertz; dB = intensity in decibels.
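The study extracts f0 and intensity with PRAAT. As a self-contained stand-in, the sketch below estimates f0 of a synthetic 200 Hz tone by autocorrelation (the classic idea behind many pitch trackers, not PRAAT's exact algorithm) and computes intensity in dB relative to an arbitrary reference amplitude of 1:

```python
import numpy as np

sr = 16000                        # sampling rate (Hz)
t = np.arange(int(0.2 * sr)) / sr  # 0.2 s of signal
signal = np.sin(2 * np.pi * 200 * t)  # synthetic 200 Hz "voice"

# Autocorrelation; the first strong peak after lag 0 gives the pitch period.
ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
lo, hi = sr // 500, sr // 75       # search 75-500 Hz, a typical speech range
lag = lo + np.argmax(ac[lo:hi])
f0_estimate = sr / lag             # period of 80 samples -> 200 Hz
print(f"estimated f0: {f0_estimate:.1f} Hz")

# Intensity: RMS amplitude expressed in decibels (re amplitude 1.0).
rms = np.sqrt(np.mean(signal ** 2))
intensity_db = 20 * np.log10(rms)
print(f"intensity: {intensity_db:.2f} dB")
```

Applied to the adjective and interjection segments of each interview, values like these populate the acoustic table above.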
Figure 1. Correlation matrices established among emotional variables (Neutral, Happy, Sad, Angry, Surprised, Disgusted, Valence, Arousal) and acoustic parameters (f0, dB) of controls (A) and patients (B). The correlation index in each cell is displayed as a color gradient: red represents a positive correlation (0 < r ≤ 1); blue, a negative one (−1 ≤ r < 0); and white, the absence of a relevant correlation.
Figure 2. Force-directed layout networks among facial expressions, acoustic parameters (f0, dB), valence, heart rate, and arousal for controls (A) and patients (B). Nodes represent recorded variables; their size and color intensity reflect the number of correlations, so bigger, darker nodes have more relations. Edges connecting pairs of nodes reflect the strength of their correlation; edge color (red for positive, blue for negative) indicates its sign.
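The exploratory-network idea behind Figure 2 can be sketched by thresholding a correlation matrix into an adjacency matrix and counting each node's connections (higher-degree nodes would be drawn larger and darker). The matrix values and the 0.5 threshold below are illustrative assumptions, not the study's data or settings:

```python
import numpy as np

labels = ["Angry", "Disgusted", "Contempt", "f0"]
# Illustrative symmetric correlation matrix among four variables.
corr = np.array([
    [1.0, 0.7, 0.6, 0.5],
    [0.7, 1.0, 0.4, 0.6],
    [0.6, 0.4, 1.0, 0.2],
    [0.5, 0.6, 0.2, 1.0],
])

threshold = 0.5  # keep only correlations at least this strong
# Adjacency: strong correlations, excluding self-loops on the diagonal.
adjacency = (np.abs(corr) >= threshold) & ~np.eye(len(labels), dtype=bool)
degree = adjacency.sum(axis=1)  # number of edges per node
for name, d in zip(labels, degree):
    print(f"{name}: {d} edge(s)")
```

In the study's patient network, a cluster of such edges links anger, disgust, contempt, and the prosodic parameters, whereas the control network lacks them.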