Tim A. Höfling, Georg W. Alpers, Björn Büdenbender, Ulrich Föhl, Antje B. M. Gerdes.
Abstract
Automatic facial coding (AFC) is a novel research tool to automatically analyze emotional facial expressions. AFC can classify emotional expressions with high accuracy in standardized picture inventories of intensively posed and prototypical expressions. However, classification of facial expressions of untrained study participants is more error prone. This discrepancy requires a direct comparison between these two sources of facial expressions. To this end, 70 untrained participants were asked to express joy, anger, surprise, sadness, disgust, and fear in a typical laboratory setting. Recorded videos were scored with a well-established AFC software (FaceReader, Noldus Information Technology). These were compared with AFC measures of standardized pictures from 70 trained actors (i.e., standardized inventories). We report the probability estimates of specific emotion categories and, in addition, Action Unit (AU) profiles for each emotion. Based on this, we used a novel machine learning approach to determine the relevant AUs for each emotion, separately for both datasets. First, misclassification was more frequent for some emotions of untrained participants. Second, AU intensities were generally lower in pictures of untrained participants compared to standardized pictures for all emotions. Third, although profiles of relevant AUs overlapped substantially across the two datasets, there were also substantial differences in their AU profiles. This research provides evidence that the application of AFC is not limited to standardized facial expression inventories but can also be used to code facial expressions of untrained participants in a typical laboratory setting.
Year: 2022 PMID: 35239654 PMCID: PMC8893617 DOI: 10.1371/journal.pone.0263863
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
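The abstract's first finding (more frequent misclassification for untrained participants) hinges on whether the intended emotion receives the highest FaceReader score for a given recording. Below is a minimal sketch of that check, assuming a pandas data frame with one row per recording and hypothetical column names (`intended` plus one column per emotion score); it is illustrative only and not the authors' analysis code.

```python
# Minimal sketch: check, per recording, whether the intended emotion received the
# highest FaceReader emotion score. Column names are hypothetical placeholders.
import pandas as pd

EMOTIONS = ["joy", "anger", "surprise", "sadness", "disgust", "fear"]

def hit_rate(df: pd.DataFrame) -> pd.Series:
    """Proportion of recordings in which the intended emotion got the top score."""
    predicted = df[EMOTIONS].idxmax(axis=1)      # emotion with the highest score
    hits = (predicted == df["intended"])         # True where prediction matches intent
    return hits.groupby(df["intended"]).mean()   # hit rate per intended emotion

# Toy example with two recordings: "joy" is scored correctly, "fear" is not
# (surprise receives the higher score), mirroring the kind of misclassification
# described in the abstract.
toy = pd.DataFrame({
    "intended": ["joy", "fear"],
    "joy": [80.0, 5.0], "anger": [2.0, 10.0], "surprise": [3.0, 30.0],
    "sadness": [1.0, 15.0], "disgust": [2.0, 10.0], "fear": [1.0, 25.0],
})
print(hit_rate(toy))
```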
Fig 1. FaceReader (FR) emotion scores.
Mean FR emotion scores separately for trained actors and untrained participants in arbitrary units [AU]. Note. Panel titles refer to the intended emotional facial expressions. The colored bars indicate the different emotion scores measured by the software. Error bars indicate 95% confidence intervals.
Mean differences of corresponding FaceReader (FR) emotion scores between untrained participants’ and trained actors’ emotional facial expressions in arbitrary units.
| Emotion Category | Untrained Participants M (SD) | Trained Actors M (SD) | t | df | p | d | Effect Interpretation |
|---|---|---|---|---|---|---|---|
| Joy | 79.02 (13.46) | 95.27 (6.91) | 8.90 | 102.18 | < .001 | 1.52 | Very Large |
| Anger | 56.22 (25.95) | 76.36 (26.31) | 4.53 | 136 | < .001 | 0.77 | Moderate |
| Surprise | 51.15 (22.17) | 88.19 (11.96) | 12.21 | 104.50 | < .001 | 2.08 | Huge |
| Sadness | 44.50 (17.38) | 83.01 (21.16) | 11.68 | 136 | < .001 | 1.99 | Very Large |
| Disgust | 40.55 (23.94) | 87.32 (15.09) | 13.73 | 114.69 | < .001 | 2.34 | Huge |
| Fear | 22.71 (19.20) | 74.25 (28.60) | 12.43 | 118.95 | < .001 | 2.12 | Huge |
Note. t = t-values, df = corrected degrees of freedom, p = p-values, d = Cohen’s d. M and SD represent mean and standard deviation. d ≥ 0.2 small; d ≥ 0.5 moderate; d ≥ 0.8 large; d ≥ 1.2 very large; d ≥ 2.0 huge.
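The t-values, corrected degrees of freedom, and Cohen’s d in this table are consistent with Welch’s two-sample t-test followed by a pooled-SD effect size. A hedged sketch of those computations, assuming two NumPy arrays of per-person emotion scores (not the authors’ original script), follows.

```python
# Sketch of the group comparison reported above: Welch's t-test (which yields the
# "corrected degrees of freedom") and Cohen's d from the pooled standard deviation.
import numpy as np
from scipy import stats

def welch_t_and_cohens_d(untrained: np.ndarray, actors: np.ndarray):
    t, p = stats.ttest_ind(untrained, actors, equal_var=False)  # Welch's t-test
    # Welch-Satterthwaite corrected degrees of freedom
    v1, v2 = untrained.var(ddof=1), actors.var(ddof=1)
    n1, n2 = len(untrained), len(actors)
    df = (v1 / n1 + v2 / n2) ** 2 / (
        (v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1)
    )
    # Cohen's d based on the pooled SD of both groups
    pooled_sd = np.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    d = abs(untrained.mean() - actors.mean()) / pooled_sd
    return t, df, p, d
```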
Performance metrics for the twelve multi-layer perceptrons classifying neutral versus emotional facial expressions, separately for untrained participants and trained actors.
| Emotion Category | Neurons (Untrained) | Accuracy (Untrained) | Kappa (Untrained) | F1 (Untrained) | Neurons (Actors) | Accuracy (Actors) | Kappa (Actors) | F1 (Actors) |
|---|---|---|---|---|---|---|---|---|
| Joy | 1 | 1.00 | 1.00 | 1.00 | 1 | .993 | .985 | .993 |
| Anger | 2 | .956 | .912 | .954 | 2 | .993 | .986 | .993 |
| Surprise | 1 | .972 | .944 | .970 | 1 | .971 | .825 | .972 |
| Sadness | 14 | .918 | .836 | .910 | 1 | .913 | .943 | .909 |
| Disgust | 18 | .978 | .956 | .976 | 1 | .993 | .986 | .994 |
| Fear | 2 | .970 | .940 | .968 | 2 | .972 | .944 | .974 |
Note. Performance of twelve multi-layer perceptrons (MLP) in the contrasted datasets (only trials of one target emotion and neutral trials). Neurons refers to the number of nodes in the single hidden layer of the MLP and is a hyperparameter of the model. Performance metrics (accuracy, kappa scores, F1) are averaged over all five folds.
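To make this setup concrete, the following sketch shows one plausible way to tune the number of hidden-layer neurons and to average accuracy, Cohen’s kappa, and F1 over five folds with scikit-learn. The feature matrix X (AU intensities) and the binary labels y (target emotion vs. neutral) are placeholders; the authors’ actual implementation and toolchain are not part of this record.

```python
# Hedged sketch of a single-hidden-layer MLP evaluated with 5-fold cross-validation,
# tuning the "Neurons" hyperparameter and reporting accuracy, kappa, and F1.
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_validate, StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import make_scorer, cohen_kappa_score

def evaluate_mlp(X: np.ndarray, y: np.ndarray) -> dict:
    pipe = make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0))
    # Tune the size of the single hidden layer (the "Neurons" column).
    grid = GridSearchCV(
        pipe,
        param_grid={"mlpclassifier__hidden_layer_sizes": [(n,) for n in range(1, 21)]},
        cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
        scoring="accuracy",
    ).fit(X, y)
    # Re-estimate the tuned model with 5-fold CV and average the three metrics.
    scoring = {"accuracy": "accuracy",
               "kappa": make_scorer(cohen_kappa_score),
               "f1": "f1"}
    cv = cross_validate(grid.best_estimator_, X, y, cv=5, scoring=scoring)
    return {metric: cv[f"test_{metric}"].mean() for metric in scoring}
```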
Fig 2. Variable importance of action units.
Note. Bars indicate the Variable Importance (VI) score of an Action Unit (AU) for the binary classification of an intended emotion against neutral facial expression, separately for the trained actors’ and untrained participants’ datasets. AUs with a VI score below 0.025 in both datasets are considered irrelevant for classification. Panel titles refer to the intended emotional facial expressions.
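This record does not specify how the VI scores in Fig 2 were derived. One common choice, sketched below under that assumption, is permutation importance computed on the fitted emotion-vs-neutral classifier, with the 0.025 relevance cut-off taken from the figure note.

```python
# Hedged sketch of per-AU variable importance via permutation importance.
# Whether this matches the authors' exact VI definition is an assumption.
import numpy as np
from sklearn.inspection import permutation_importance

def au_importance(fitted_clf, X: np.ndarray, y: np.ndarray, au_names: list[str]):
    result = permutation_importance(fitted_clf, X, y, n_repeats=50, random_state=0)
    vi = dict(zip(au_names, result.importances_mean))
    # AUs at or above the 0.025 threshold are treated as relevant for classification.
    relevant = {au: score for au, score in vi.items() if score >= 0.025}
    return vi, relevant
```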
Fig 3. Action unit profiles.
Mean action unit (AU) intensity for trained actors and untrained participants, measured by FaceReader in arbitrary units. Note. Panel titles refer to the intended emotional facial expressions. Error bars indicate 95% confidence intervals.
MANOVA for specific Action Unit (AU) activity and datasets (untrained participants’ and trained actors’ emotional facial expressions).
| Emotion Category | AU dfs | AU F | AU p | AU ηp² | Dataset dfs | Dataset F | Dataset p | Dataset ηp² | AU × Dataset dfs | AU × Dataset F | AU × Dataset p | AU × Dataset ηp² |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Joy | 2,133 | 15.52 | < .001 | .19 | 1,134 | 228.76 | < .001 | .63 | 2,133 | 31.93 | < .001 | .32 |
| Anger | 3,134 | 2.95 | .035 | .06 | 1,136 | 27.40 | < .001 | .17 | 3,134 | 20.86 | < .001 | .32 |
| Surprise | 4,133 | 9.58 | < .001 | .22 | 1,136 | 240.22 | < .001 | .64 | 4,133 | 1.91 | .112 | .05 |
| Sadness | 3,134 | 10.22 | < .001 | .19 | 1,136 | 77.85 | < .001 | .36 | 3,134 | 15.41 | < .001 | .26 |
| Disgust | 4,133 | 10.80 | < .001 | .25 | 1,136 | 154.61 | < .001 | .53 | 4,133 | 2.85 | .027 | .08 |
| Fear | 4,133 | 4.68 | < .001 | .12 | 1,136 | 199.38 | < .001 | .59 | 4,133 | 29.59 | < .001 | .47 |
Note. dfs = degrees of freedom, F = F-Values, p = p-values, ηp2 = partial eta squared. AU subsets: Joy (AU6, AU12, AU25), Anger (AU4, AU7, AU23, AU24), Surprise (AU1, AU2, AU5, AU25, AU26), Sadness (AU1, AU4, AU15, AU17), Disgust (AU4, AU7, AU9, AU10, AU25), Fear (AU1, AU2, AU4, AU5, AU25).
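The table above reports an AU effect, a dataset effect, and their interaction, each with partial eta squared. As a hedged approximation (not necessarily the authors’ exact multivariate procedure), a mixed-design ANOVA over long-format data yields the same three effects; the sketch below assumes hypothetical column names (`intensity`, `AU`, `dataset`, `id`) and the pingouin package.

```python
# Hedged sketch: AU (within-subject) x dataset (between-subject) analysis with
# partial eta squared, as one plausible way to reproduce the table's structure.
import pandas as pd
import pingouin as pg

def au_by_dataset_anova(long_df: pd.DataFrame) -> pd.DataFrame:
    """long_df: one row per face x AU, restricted to the emotion's AU subset
    (e.g., for joy only AU6, AU12, AU25, as listed in the note above)."""
    aov = pg.mixed_anova(data=long_df, dv="intensity", within="AU",
                         subject="id", between="dataset")
    # Columns of interest: Source, DF1, DF2, F, p-unc, np2 (partial eta squared)
    return aov[["Source", "DF1", "DF2", "F", "p-unc", "np2"]]
```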
Mean differences of Action Unit (AU) activity between untrained participants’ and trained actors’ emotional facial expressions in arbitrary units.
| Emotion Category | AU | Untrained Participants M (SD) | Trained Actors M (SD) | t | df | p | d | Effect Interpretation |
|---|---|---|---|---|---|---|---|---|
| Joy | 06 | 41.90 (22.91) | 84.39 (15.49) | 12.70 | 119.77 | < .001 | 2.17 | Huge |
| | 12 | 62.63 (13.36) | 88.15 (6.00) | 14.43 | 94.97 | < .001 | 2.46 | Huge |
| | 25 | 54.79 (21.05) | 87.31 (22.21) | 9.19 | 133.98 | < .001 | 1.50 | Very Large |
| Anger | 04 | 47.46 (25.64) | 75.49 (25.90) | 6.39 | 135.99 | < .001 | 1.09 | Large |
| | 07 | 32.26 (21.39) | 58.53 (25.10) | 6.62 | 132.67 | < .001 | 1.13 | Large |
| | 23 | 34.43 (21.47) | 50.85 (40.35) | 2.99 | 103.65 | < .001 | 0.51 | Moderate |
| | 24 | 35.47 (26.45) | 47.57 (42.51) | 2.01 | 113.79 | < .001 | 0.34 | Small |
| Surprise | 01 | 24.10 (23.62) | 75.05 (26.48) | 11.93 | 136 | < .001 | 2.03 | Huge |
| | 02 | 27.85 (23.42) | 76.31 (26.57) | 11.36 | 136 | < .001 | 1.93 | Very Large |
| | 05 | 23.08 (23.77) | 77.66 (26.83) | 12.65 | 136 | < .001 | 2.15 | Huge |
| | 25 | 40.18 (21.41) | 57.68 (29.93) | 3.95 | 123.15 | < .001 | 0.67 | Moderate |
| | 26 | 29.19 (19.91) | 63.37 (20.51) | 9.93 | 135.88 | < .001 | 1.69 | Very Large |
| Sadness | 01 | 10.52 (16.64) | 59.40 (36.38) | 10.15 | 95.25 | < .001 | 1.73 | Very Large |
| | 04 | 12.82 (19.29) | 60.92 (38.91) | 9.20 | 99.51 | < .001 | 1.57 | Very Large |
| | 15 | 37.68 (22.91) | 64.74 (34.66) | 5.41 | 117.88 | < .001 | 0.92 | Large |
| | 17 | 45.74 (25.65) | 62.80 (35.94) | 3.21 | 123.02 | .002 | 0.55 | Moderate |
| Disgust | 04 | 30.38 (22.59) | 52.30 (33.90) | 4.47 | 118.44 | < .001 | 0.76 | Moderate |
| | 07 | 29.73 (21.91) | 46.86 (27.75) | 4.02 | 136 | < .001 | 0.69 | Moderate |
| | 09 | 20.72 (19.18) | 70.41 (25.85) | 12.82 | 136 | < .001 | 2.18 | Huge |
| | 10 | 25.32 (22.04) | 68.48 (27.49) | 10.17 | 136 | < .001 | 1.73 | Very Large |
| | 25 | 28.04 (23.68) | 61.09 (38.92) | 6.03 | 112.28 | < .001 | 1.03 | Large |
| Fear | 01 | 22.75 (23.26) | 67.77 (29.34) | 9.99 | 136 | < .001 | 1.70 | Very Large |
| | 02 | 12.11 (17.42) | 47.65 (39.36) | 6.86 | 93.66 | < .001 | 1.17 | Large |
| | 04 | 10.98 (16.18) | 50.94 (36.71) | 8.28 | 93.46 | < .001 | 1.41 | Very Large |
| | 05 | 15.46 (20.37) | 63.25 (35.89) | 9.62 | 107.70 | < .001 | 1.64 | Very Large |
| | 25 | 20.51 (19.24) | 74.37 (24.60) | 14.33 | 128.52 | < .001 | 2.44 | Huge |
Note. t = t-values, df = corrected degrees of freedom, p = p-values, d = Cohen’s d. M and SD represent mean and standard deviation. d ≥ 0.2 small; d ≥ 0.5 moderate; d ≥ 0.8 large; d ≥ 1.2 very large; d ≥ 2.0 huge.