Andrea E. Kowallik, Stefan R. Schweinberger
Abstract
The prevalence of autism spectrum disorders (ASD) has increased strongly over the past decades, and so has the demand for adequate behavioral assessment and support for persons affected by ASD. Here we provide a review of original research that used sensor technology for an objective assessment of social behavior, either with the aim of assisting the assessment of autism or with the aim of using this technology for intervention and support of people with autism. Considering rapid technological progress, we focus (1) on studies published within the last 10 years (2009-2019), (2) on contact- and irritation-free sensor technology that does not constrain natural movement and interaction, and (3) on sensory input from the face, the voice, or body movements. We conclude that sensor technology has already demonstrated its great potential for improving both behavioral assessment and interventions in autism spectrum disorders. We also discuss selected examples of recent theoretical questions related to the understanding of psychological changes and potentials in autism. In addition to its applied potential, we argue that sensor technology, when implemented by appropriate interdisciplinary teams, may even contribute to such theoretical issues in understanding autism.
Keywords: assessment; autism spectrum disorder (ASD); automatic recognition; body motion; face; intervention; voice
Year: 2019 PMID: 31689906 PMCID: PMC6864871 DOI: 10.3390/s19214787
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Original Articles on Sensor-Based Assessment for the Identification of Autism Spectrum Disorder (ASD)-Related Features.
| Domain of Behavior | Reference | Solution Name | Sensor/Parameters | Sample (Mean Age) | Setting and Stimuli | Results |
|---|---|---|---|---|---|---|
| Facial Movements | Samad et al., 2016 | Sony EVI-D70 Color Camera, 3dMD | 2D, 3D Imaging | ASD: 8 (13 y), TD: 8 (16 y) | Emotion Recognition Task: 12 3D Faces | ASD: Intense, Asymmetrical Facial Expressions with Lack of Differential Facial Muscle Actions |
| Facial Movements | Del Coco et al., 2017 | Webcam | 2D Imaging | ASD: 5 (5 y), TD: 5 (Age- and Gender-Matched) | Watching 9 Emotion-Eliciting Videos Taken from Famous Cartoons | ASD Descriptively Exhibited Less Facial Behavioral Complexity; Lower Face Seemed More Significant Than Upper Face to Distinguish Between TD and ASD |
| Facial Movements | Leo et al., 2018 | Off-the-Shelf Camera | 2D Imaging | ASD: 17 (9 y) | Emotion Production Task: 4 Expressions (Happiness, Sadness, Fear, Anger) | Descriptive Scores for Upper and Lower Face Parts |
| Facial Movements | Egger et al., 2018 | iPhone | 2D Imaging (Front Camera) | High Risk: 555 (3 y), Low Risk: 1199 (3 y) | Watching Video Clips | More Neutral and Less Positive Emotional Reactions in High-Risk Group |
| Facial Movements | Samad et al., 2019 | PrimeSense | 3D Imaging | ASD: 10 (14 y), TD: 10 (13 y) | Watching Story Content Narrated by an Animated Avatar | ASD: Overall Lower FAU Activation, Higher Activation of FAU 15, Limited Activation of FAUs in Response to Encountering Negative Emotional States; Concurrent Activations of Several FAU Pairs Were Absent |
| Eye Gaze | Chawarska and Shic, 2009 | iView X™ RED | Eye-Tracking | ASD: 44, TD: 30 | Watching Color Images of Affectively Neutral Female Faces | ASD: Attended to Visual Scenes Containing Faces to a Similar Extent; Looked Less at Inner Facial Features, with Older ASD Participants Spending Less Time on Inner Features Than Younger Ones; Spent Less Time Looking at the Mouth |
| Eye Gaze | Liu et al., 2016 | Tobii T60 | Eye-Tracking | ASD: 29 (8 y), TD-Age: 29 (8 y), TD-Ability: 29 (6 y) | Face Memorization and Recognition Task | Classification Accuracy Predicting ASD: 88.51% (p < .001) |
| Eye Gaze | Król and Król, 2019 | SMI RED250 | Eye-Tracking | ASD: 21 (16 y), TD: 23 (16 y) | Face Perception Tasks with 60 Color Photographs of Faces (FACES Database) | Prediction of ASD vs. TD Group Membership Based on “Spatial” Information Only (M = 53.9%) Was Significantly Lower Than That of the “Spatial + Temporal” Model (M = 55.5%) |
| Eye Gaze | Wang et al., 2015 | Tobii X300 | Eye-Tracking | ASD: 20 (31 y), TD: 19 (32 y) | Free Viewing Task with 700 Natural Scene Images (OSIE Dataset) | ASD: Higher Saliency Weights for Low-Level Properties, Lower Weights for Object- and Semantic-Based Properties |
| Voice | Min and Tewfik, 2010 | Microphone | Audio, Accelerometer | ASD: 4 | No Context Given | 22/24 Vocal Stimming Events Detected by Classifier |
| Voice | Min and Fetzner, 2018 | Microphone | Audio, 2D Imaging | ASD: 4 | No Context Given | Trained Dictionaries Detect Vocal Stimming with Sensitivity: 73–93% |
| Voice | Marchi et al., 2015 | Zoom H1 Handy Recorder (Hebrew, English), Zoom H4 with RØDE NTG-2 Microphone (Swedish) | Audio | ASD: 7 (Hebrew), 11 (Swedish), 9 (English); TD: 10 (Hebrew), 9 (Swedish), 9 (English) | Repeating Sentences from 9 Emotional Stories | ASD: Generally Performed Poorer; in English and Swedish “Angry” Was Poorly Performed, in Hebrew “Afraid” Was Poorly Performed |
| Voice | Ringeval et al., 2010 | Logitech USB Desktop Microphone | Audio | AD: 12 (10 y), PDD-NOS: 10 (10 y), SLI: 13 (10 y), TD: 73 (10 y) | Reading 26 Sentences with Certain Prosodic Dependencies (Descending, Falling, Floating, Rising) | AD: Intonation for Falling, Floating, and Especially Rising Was Worse Compared to TD |
| Body Movement | Gonçalves et al., 2014 | Microsoft Kinect | Color and Depth Sensors, IR Emitter, Microphone | ASD: 5 (9 y) | Playing Session with Robot | Good Detection of Hand Flapping with DTW, but Susceptible to Noise |
| Body Movement | Jazouli et al., 2019 | Microsoft Kinect V1 | Color Sensor, IR Depth Sensors, IR Emitter, Microphone | ASD: 5 (5–10 y), TD: 5 (Training Data) | No Context Given | 94% Overall Recognition Rate for Stereotyped Gesture Recognition ($P Algorithm) |
| Body Movement | Rynkiewicz et al., 2016 | Microsoft Kinect | Color Sensor, IR Depth Sensors, IR Emitter | ASD: 33 (5–10 y) | ADOS-2 Tasks (Cartoon Task, Demonstration Task) | ASD: Females Presented Better Non-Verbal Skills (Gestures), Although Communication Skills Were Lower |
| Body Movement | Anzulewicz et al., 2016 | iPad | Touch Sensor, Accelerometer | ASD: 37 (4 y); TD: 45 (5 y) | Playing 2 Serious Games (Sharing, Coloring) | Differences in Pressure Applied to the Device as Well as Differences in Gesture Kinematics and Form |
| Multimodal | Samad et al., 2017 | Sony EVI-D70, Mirametrix S2 | 2D Imaging, Eye Tracker | ASD: 8 (13 y); TD: 8 (16 y) | Emotion Recognition and Manipulation Task of 3D Faces | ASD: Uncontrolled Manifestation of FAU 12; Spontaneous Facial Responses Not Synchronized with Visual Engagement with Facial Expressions; Poor Correlation in Dynamic Eye-Hand Movements |
| Multimodal | Jaiswal et al., 2017 | Microsoft Kinect V2 | Color Sensor, IR Depth Sensors, IR Emitter, Microphone | ASD: 22, ADHD: 4, ASD + ADHD: 11, TD: 18 | Read and Listen to a Set of 12 Short Stories (from the ‘Strange Stories’ Task), Accompanied by 2–3 Questions | Classifier Sensitivity for Control vs. Clinical Condition: 96.4%; ASD + ADHD vs. ASD: 93.9% |
| Social Behavior | Westeyn et al., 2012 | BlueSense, Video Cameras | Touch Sensors, Motion Sensor, Microphone, 2D Imaging | High-Risk: 1, TD: 10 Adults; 12 Children | Playing with Smart Toys with Scripted Play Prompts | Retrieval Score of 59% for a Single Child Using Models Constructed from Adult Play Data |
| Social Behavior | Anzalone et al., 2014 | Microsoft Kinect; Nao | Color Sensor, IR Depth Sensors, IR Emitter, Microphone | ASD: 16 (9 y); TD: 16 (8 y) | Nao Prompts JA by Gazing / by Gazing and Pointing / by Gazing, Pointing, and Vocalizing at Pictures | ASD: Trunk Position Showed Less Stability in 4D Compared to TD Controls; Gaze Exploration Showed Less Accuracy |
| Social Behavior | Campbell et al., 2019 | iPad | 2D Imaging Sensor (Front Camera) | ASD: 22 (2 y); TD: 82 (2 y) | Reaction to Name-Calling While Watching Videos | ASD: Classifying by Atypical Orientation: Sensitivity: 96%, Specificity: 38% |
| Social Behavior | Petric et al., 2014 | Nao Robot | Cameras, Microphones, Ultrasound Range Sensors, Tactile Sensors, Force-Sensitive Resistors, Accelerometers | ASD: 3 (5–8 y), TD: 1 (6 y) | ADOS Tasks (Name-Calling, JA, Play Request, Imitation) | Descriptively High Correspondence Between Human Rater and Algorithm |
Note: Age in the sample description typically refers to the mean age of participants per group; a few studies instead report age ranges or individual ages of single cases, or did not specify exact age.
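Several of the body-movement studies above (e.g., Gonçalves et al., 2014) matched recorded joint trajectories against templates of stereotyped gestures such as hand flapping using dynamic time warping (DTW). As an illustration only, a minimal DTW distance in Python; the trajectory values and the idea of thresholding the distance are invented for this sketch and are not taken from any of the cited studies:

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = DTW distance between the prefixes a[:i] and b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Hypothetical example: a hand-height trajectory compared against a
# flapping template; a small DTW distance would flag a candidate event.
template = [0.0, 1.0, 0.0, 1.0, 0.0]        # idealized flapping cycle
observed = [0.0, 0.9, 0.1, 1.1, 0.0, 0.0]   # noisy, time-warped capture
print(dtw_distance(observed, template))
```

Because DTW aligns sequences non-linearly in time, it tolerates the variable tempo of repetitive movements, but, as the Gonçalves et al. results suggest, raw distances remain sensitive to sensor noise.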
Original Articles on Sensor-Based Interventions and Support in ASD.
| Domain of Behavior | Reference | Solution Name | Sensor/Parameters | Sample (Mean Age) | Setting and Stimuli | Results |
|---|---|---|---|---|---|---|
| Emotion | Gordon et al., 2014 | Webcam | 2D Imaging Sensor | ASD: 30 (11 y), TD: 23 (11 y) | Playing Computer Game (FaceMaze) | ASD: Increase in Happy and Angry Expression Performance |
| Emotion | Leo et al., 2015 | Camera, Robokind™ R25 Robot | 2D Imaging Sensor | ASD: 3 | Imitate Expressions from Robot (Happiness, Sadness, Anger, and Fear) | 31/60 Interactions Recognized; 19/60 No Imitation |
| Emotion | Piana et al., 2019 | Microsoft Kinect V2 | Color Sensor, IR Depth Sensors, IR Emitter | ASD: 10 (10 y) | 10 Sessions of a Body Emotion Expression and Recognition Task | Increased Accuracy in Expression and Recognition After Training Sessions in the Trained Group; Transfer Effect on Facial Expression Recognition |
| Social Skills | Robins et al., 2010 | KASPAR, Video Cameras | Touch Sensor, 2D Imaging | ASD: 3 | Unconstrained Interaction with the Robot | Interaction Evaluation Through Sensor Activation, Differential Interaction, and Maintenance of Interaction |
| Social Skills | Costa et al., 2009 | LEGO Mindstorms™, Video Cameras | Touch Sensor, Sound Sensor | ASD: 2 (17, 19 y) | 4–5 Sessions of Feedbacked Interaction with Robot | Interaction Evaluation Through Sensor Activation, Differential Interaction, and Maintenance of Interaction |
| Social Skills | Wong and Zhong, 2016 | CuDDler Robot, Video Camera | Microphone, Contact Microphone, Tactile and Posture Sensors | ASD: 8 (5 y) | 5 Sessions of ABA (Didactic Teaching Followed by Role Modeling by Either a Robot (RT) or a Human (CT)) | Robot Training Significantly Facilitated Verbal and Gestural Communicative Skills, Increased Eye Contact Duration |
| Social Skills | Uzuegbunam et al., 2015 | Microsoft Kinect | Color Sensor, IR Depth Sensors, IR Emitter, Microphone | ASD: 3 (7–12 y) | Greeting Game with the Participant’s Face, Reacting to the Participant, Getting Appraisal | All 3 Showed Increased Social Greeting Behavior Throughout and After the Intervention |
| Social Skills | Mower et al., 2011 | HDR-SR12 High-Definition Handycam Camcorders | Microphones, 2D Video Sensors | ASD: 2 (6, 12 y) | 4 Sessions with Embodied Conversational Agent RACHEL Going Through Emotional Scenarios | Tool for Eliciting Interactive Behavior |
Note: Age in the sample description, where specified in a study, refers either to the mean age of participants per group, to age ranges, or to individual ages in small samples or single cases.