Marco Leo, Pierluigi Carcagnì, Cosimo Distante, Paolo Spagnolo, Pier Luigi Mazzeo, Anna Chiara Rosato, Serena Petrocchi, Chiara Pellegrino, Annalisa Levante, Filomena De Lumè, Flavia Lecciso.
Abstract
In this paper, a computational approach is proposed and put into practice to assess the capability of children diagnosed with Autism Spectrum Disorder (ASD) to produce facial expressions. The proposed approach is based on computer vision components working on sequences of images acquired by an off-the-shelf camera in unconstrained conditions. Action unit intensities are estimated by analyzing local appearance, and then both temporal and geometrical relationships, learned by Convolutional Neural Networks, are exploited to regularize the gathered estimates. To cope with stereotyped movements and to highlight even subtle voluntary movements of facial muscles, a personalized and contextual statistical model of the non-emotional face is formulated and used as a reference. Experimental results demonstrate how the proposed pipeline can improve the analysis of facial expressions produced by ASD children. A comparison of the system's outputs with the evaluations performed by psychologists on the same group of ASD children shows how the quantitative analysis of children's abilities helps to go beyond traditional qualitative ASD assessment/diagnosis protocols, whose outcomes are affected by human limitations in observing and understanding multi-cue behaviors such as facial expressions.
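The abstract's key idea of a personalized statistical model of the non-emotional face can be sketched as a per-AU baseline against which frame-wise intensities are normalized, so that subtle voluntary activations stand out while habitual (stereotyped) movements are discounted. This is a minimal illustration assuming per-AU Gaussian statistics over neutral frames; the function names and the z-score formulation are illustrative, not taken from the paper.

```python
import numpy as np

def fit_neutral_model(neutral_au_intensities):
    """Fit a per-AU baseline (mean, std) from frames of the child's
    non-emotional (neutral) face. Input shape: (frames, num_AUs)."""
    arr = np.asarray(neutral_au_intensities, dtype=float)
    return arr.mean(axis=0), arr.std(axis=0) + 1e-6  # avoid div-by-zero

def normalized_activation(frame_intensities, mu, sigma):
    """Express each AU intensity of one frame as a deviation from the
    personal neutral baseline; only activations above baseline are kept."""
    z = (np.asarray(frame_intensities, dtype=float) - mu) / sigma
    return np.clip(z, 0.0, None)
```

In this sketch, a large normalized value flags an AU that departs markedly from the child's own resting face, which is the reference behavior the abstract describes.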
Keywords: ASD diagnosis and assessment; geometrical and temporal regularization of facial action units; quantitative facial expression analysis
Year: 2018 PMID: 30453518 PMCID: PMC6263710 DOI: 10.3390/s18113993
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. The algorithmic pipeline.
Figure 2. The learned joint probabilities among AUs.
Figure 3. An example of regularized estimates for AU6.
Action units used for monitoring the ability to produce the four basic facial expressions (H = Happiness; S = Sadness; F = Fear; and A = Anger).
| AU | Full Name | Involved in |
|---|---|---|
| AU1 | Inner brow raiser | S-F |
| AU2 | Outer brow raiser | F |
| AU4 | Brow lowerer | S-F-A |
| AU5 | Upper lid raiser | F-A |
| AU6 | Cheek raiser | H |
| AU7 | Lid tightener | A |
| AU9 | Nose wrinkler | A |
| AU12 | Lip corner puller | H |
| AU15 | Lip corner depressor | S |
| AU17 | Chin raiser | A |
| AU20 | Lip stretched | F |
| AU23 | Lip tightener | A |
| AU25 | Lips part | A |
| AU26 | Jaw drop | F |
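The AU-to-expression mapping in the table above is the kind of lookup a monitoring pipeline would consult when deciding which action units to track for a target expression. A direct transcription of the table as a small dictionary (the names `AU_TO_EXPRESSIONS` and `aus_for` are illustrative):

```python
# Action units mapped to the four monitored basic expressions
# (H = Happiness, S = Sadness, F = Fear, A = Anger), as in the table.
AU_TO_EXPRESSIONS = {
    1: {"S", "F"}, 2: {"F"}, 4: {"S", "F", "A"}, 5: {"F", "A"},
    6: {"H"}, 7: {"A"}, 9: {"A"}, 12: {"H"}, 15: {"S"},
    17: {"A"}, 20: {"F"}, 23: {"A"}, 25: {"A"}, 26: {"F"},
}

def aus_for(expression):
    """Return the set of AU numbers monitored for one expression label."""
    return {au for au, exprs in AU_TO_EXPRESSIONS.items() if expression in exprs}
```

For instance, Happiness involves only AU6 (cheek raiser) and AU12 (lip corner puller), while Fear draws on the largest set of upper-face units.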
Figure 4. (Top) Intensity of AU12 while lip corners were pulled up; and (Bottom) corresponding computed variations.
Measure computation of production ability for: H = Happiness; S = Sadness; F = Fear; and A = Anger. The time index has been omitted for better table readability.
| Facial Expression | Production Scores |
|---|---|
| H | (formula not recovered from extraction) |
| S | (formula not recovered from extraction) |
| F | (formula not recovered from extraction) |
| A | (formula not recovered from extraction) |
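Since the paper's exact score formulas did not survive extraction, here is only a hedged sketch of how a per-expression production score could be assembled from baseline-normalized AU intensities, split into upper- and lower-face contributions as Figures 7 and 9 suggest. The region split, the averaging rule, and all names are assumptions for illustration, not the authors' definitions.

```python
import numpy as np

# Assumed split of the monitored AUs into face halves (illustrative).
UPPER_AUS = {1, 2, 4, 5, 6, 7, 9}        # brow / eye / cheek region
LOWER_AUS = {12, 15, 17, 20, 23, 25, 26}  # mouth / jaw region

def production_score(au_intensities, involved_aus, face_half):
    """Average the normalized intensities of the AUs involved in one
    expression, restricted to one face half.

    au_intensities: dict mapping AU number -> normalized intensity
    involved_aus: set of AU numbers tied to the target expression
    face_half: "upper" or "lower"
    """
    region = UPPER_AUS if face_half == "upper" else LOWER_AUS
    selected = [au_intensities[a] for a in involved_aus if a in region]
    return float(np.mean(selected)) if selected else 0.0
```

Under this sketch, a child who pulls the lip corners but does not raise the cheeks would score high on the lower-face Happiness measure and low on the upper-face one, matching the kind of per-half breakdown plotted in Figure 9.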
An overview of the scores obtained by the proposed pipeline on the subjects in the CK+ dataset.
(Per-row values, with the percentages in parentheses lost in extraction: 4/5/60, 2/2/65, 4/2/22, 2/0/26, 2/0/23, 3/3/19, 2/3/40, 0/5/40; overall: 19/20/124. Row and column labels did not survive extraction.)
Figure 5. Two examples of fear execution in the CK+ dataset. Computational scores point out that the expression performed by Subject 54 (first row) is quantitatively worse than the one performed by Subject 132 (second row).
Facial expression recognition performance on the CK+ dataset, as a confusion matrix (N = Neutral; H = Happiness; S = Sadness; F = Fear; and A = Anger).
|  | N | H | S | F | A |
|---|---|---|---|---|---|
| N |  | 1 | 1 | 0 | 0 |
| H | 0 |  | 1 | 0 | 0 |
| S | 4 | 2 |  | 2 | 1 |
| F | 0 | 0 | 2 |  | 2 |
| A | 5 | 2 | 2 | 3 |  |

(The diagonal entries of the confusion matrix did not survive extraction.)
Figure 6. Annotations provided by a group of three expert professionals.
Figure 7. Measures of facial expression production ability for ASD Child 2, separately plotted for upper (top) and lower (bottom) face parts. Vertical green lines indicate the time instant at which the child was asked to produce the fear facial expression.
Figure 8. Intensity values computed for Child 2 for the action units involved in the production of the fear expression and related to the upper face part (from left to right, top to bottom: AU1, AU2, AU4 and AU5).
Figure 9. Graphical representations of the average production scores computed for each of the 17 ASD children. From top to bottom: Happiness, Sadness, Fear and Anger related values. Blue bars refer to the upper face part, whereas red bars refer to the lower face part.
Figure 10. 2D visual representation of the data variability in the computed scores for the 17 ASD children.
Figure 11. Dispositions of the Happiness production measures for the 17 ASD children, plotted in the plane spanned by the lower-face and upper-face Happiness measures.