Huanghao Feng, Mohammad H. Mahoor, Francesca Dino.
Abstract
Children with Autism Spectrum Disorder (ASD) experience deficits in verbal and nonverbal communication skills, including motor control, turn-taking, and emotion recognition. Innovative technology, such as socially assistive robots, has been shown to be a viable method for autism therapy. This paper presents a novel robot-based music-therapy platform for modeling and improving the social responses and behaviors of children with ASD. Our autonomous social interactive system consists of three modules. Module one provides an autonomous initiative positioning system that allows the robot, NAO, to properly localize and play the instrument (a xylophone) using its arms. Module two allows NAO to play customized songs composed by individuals. Module three provides a real-life music-therapy experience to the users. We adopted the Short-Time Fourier Transform and the Levenshtein distance to fulfill the design requirements: 1) "music detection" and 2) "smart scoring and feedback", which allow NAO to understand music and provide additional practice and oral feedback to the users as applicable. We designed and implemented six Human-Robot Interaction (HRI) sessions, including four intervention sessions. Nine children with ASD and seven Typically Developing (TD) children participated in a total of fifty HRI experimental sessions. Using our platform, we collected and analyzed data on social behavioral changes and emotion recognition using Electrodermal Activity (EDA) signals. The results of our experiments demonstrate that most of the participants were able to complete motor control tasks with 70% accuracy. Six of the nine ASD participants showed stable turn-taking behavior when playing music. The results of automated emotion classification using Support Vector Machines illustrate that emotional arousal in the ASD group can be detected and reliably recognized via EDA bio-signals.
In summary, the results of our data analyses, including emotion classification using EDA signals, indicate that the proposed robot-based music-therapy platform is an attractive and promising assistive tool for facilitating the improvement of fine motor control and turn-taking skills in children with ASD.
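The "smart scoring" requirement compares the sequence of notes a child plays against the target melody using the Levenshtein (edit) distance. A minimal sketch of that idea follows; the note names and the normalized similarity score are illustrative assumptions, not the paper's exact scoring rule.

```python
def levenshtein(a, b):
    """Edit distance between two note sequences (insert/delete/substitute cost 1)."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))                      # distances for the previous row
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,                      # deletion
                                     dp[j - 1] + 1,                  # insertion
                                     prev + (a[i - 1] != b[j - 1]))  # substitution
    return dp[n]

def score(target, played):
    """Similarity in [0, 1]; 1.0 means the melody was reproduced exactly."""
    return 1.0 - levenshtein(target, played) / max(len(target), len(played))

target = ["C", "D", "E", "C"]
played = ["C", "D", "F", "C"]      # one wrong note
print(score(target, played))       # 0.75
```

A score below some threshold could then trigger the extra practice round and oral feedback described in the abstract.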
Keywords: autism; emotion classification; motor control; music therapy; social robotics; turn-taking
Year: 2022 PMID: 35677082 PMCID: PMC9169087 DOI: 10.3389/frobt.2022.855819
Source DB: PubMed Journal: Front Robot AI ISSN: 2296-9144
FIGURE 1 Mallet gripper.
FIGURE 2 Instrument stand, front view.
FIGURE 3 Experiment room.
FIGURE 4 Experiment session illustration.
FIGURE 5 The distribution of the targeted emotions across all subjects and events.
FIGURE 6 Block diagram of the module-based acoustic music interactive system.
FIGURE 7 Color detection from NAO's bottom camera: (A) single blue color detection; (B) full instrument color detection; (C) color-based edge detection.
FIGURE 8 Melody detection with the Short-Time Fourier Transform.
FIGURE 9 Motor control accuracy results.
FIGURE 10 Main music therapy performance accuracy.
FIGURE 11 Normalized turn-taking behavior results for all subjects during intervention sessions.
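Figure 8 concerns melody detection with the Short-Time Fourier Transform: each STFT frame's dominant frequency is mapped to the nearest known note. The sketch below illustrates that idea only; the note table, sampling rate, and silence threshold are placeholder assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.signal import stft

FS = 16000
# Hypothetical xylophone bar frequencies (Hz) -- placeholders, not the paper's values
NOTES = {"C6": 1046.5, "D6": 1174.7, "E6": 1318.5}

def detect_notes(signal, fs=FS, win=2048):
    """Return the sequence of dominant notes, one entry per sustained tone."""
    f, t, Z = stft(signal, fs=fs, nperseg=win)
    mag = np.abs(Z)
    notes = []
    for frame in mag.T:                 # one spectrum per time frame
        if frame.max() < 1e-3:          # skip near-silent frames
            continue
        peak = f[frame.argmax()]        # strongest frequency bin
        name = min(NOTES, key=lambda n: abs(NOTES[n] - peak))
        if not notes or notes[-1] != name:
            notes.append(name)          # collapse repeated frames into one note
    return notes

# Two synthetic one-second tones: C6 then E6
t = np.arange(FS) / FS
sig = np.concatenate([np.sin(2 * np.pi * 1046.5 * t),
                      np.sin(2 * np.pi * 1318.5 * t)])
print(detect_notes(sig))
```

With a 2048-sample window at 16 kHz the frequency resolution is about 7.8 Hz, comfortably finer than the spacing between adjacent xylophone bars in this toy note table.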
Emotion change in different events using wavelet-based feature extraction under an SVM classifier.
| Kernel | Comparison | Accuracy (%) | AUC | Precision (%) | Recall (%) |
|---|---|---|---|---|---|
| Linear | S1 vs. S2 | 75 | 0.78 | 76 | 72 |
| Linear | S1 vs. S3 | 57 | 0.59 | 56 | 69 |
| Linear | S2 vs. S3 | 69 | 0.72 | 64 | 86 |
| Linear | S1 vs. S2 vs. S3 | 52 | | | |
| Polynomial | S1 vs. S2 | 66 | 0.70 | 70 | 54 |
| Polynomial | S1 vs. S3 | 64 | 0.66 | 62 | 68 |
| Polynomial | S2 vs. S3 | 65 | 0.68 | 62 | 79 |
| Polynomial | S1 vs. S2 vs. S3 | 50 | | | |
| RBF | S1 vs. S2 | 76 | 0.81 | 76 | 75 |
| RBF | S1 vs. S3 | 57 | 0.62 | 57 | 69 |
| RBF | S2 vs. S3 | 70 | 0.76 | 66 | 83 |
| RBF | S1 vs. S2 vs. S3 | 53 | | | |
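A pairwise SVM comparison of this kind, sweeping the same three kernels, can be reproduced in outline with scikit-learn. The feature vectors below are synthetic stand-ins, not the paper's wavelet-based EDA features.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-ins for per-session EDA feature vectors (80 samples per class)
X = np.vstack([rng.normal(0.0, 1.0, (80, 8)),    # session S1
               rng.normal(1.0, 1.0, (80, 8))])   # session S2
y = np.array([0] * 80 + [1] * 80)

# Mirror the table's kernel sweep with 5-fold cross-validated accuracy
for kernel in ("linear", "poly", "rbf"):
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{kernel}: {acc:.2f}")
```

Standardizing features before the SVM matters especially for the RBF kernel, whose distance computation is scale-sensitive.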
Emotion change classification performance in a single event with segmentation, using both SVM and KNN classifiers. All values are accuracies (%).
| SVM kernel / KNN K | Comparison | Warm-up SVM | Warm-up KNN | Song Practice SVM | Song Practice KNN |
|---|---|---|---|---|---|
| Linear / K = 1 | learn vs. play | 52.62 | 54 | 53.79 | 52.41 |
| Linear / K = 1 | learn vs. feedback | 53.38 | 50.13 | 53.1 | 51.72 |
| Linear / K = 1 | play vs. feedback | 47.5 | 50.38 | 54.31 | 50.86 |
| Linear / K = 1 | learn vs. play vs. feedback | 35.08 | 36.25 | 35.52 | 36.55 |
| Polynomial / K = 3 | learn vs. play | 49 | 50.25 | 53.79 | 50.69 |
| Polynomial / K = 3 | learn vs. feedback | 50.75 | 50.13 | 50.86 | 50.34 |
| Polynomial / K = 3 | play vs. feedback | 49.87 | 49.5 | 49.14 | 52.07 |
| Polynomial / K = 3 | learn vs. play vs. feedback | 33.92 | 35.83 | 34.71 | 35.29 |
| RBF / K = 5 | learn vs. play | 54.38 | 48.37 | 50.86 | 50.17 |
| RBF / K = 5 | learn vs. feedback | 55.75 | 52.75 | 53.97 | 50.17 |
| RBF / K = 5 | play vs. feedback | 51.12 | 50 | 53.79 | 52.93 |
| RBF / K = 5 | learn vs. play vs. feedback | 36.83 | 34.17 | 34.83 | 33.1 |
Classification rate (%) for children learn, children play, and robot feedback events across warm-up (S1) and music practice (S2) sessions.
| Comparison | SVM (Linear) | SVM (Polynomial) | SVM (RBF) | KNN (K = 1) | KNN (K = 3) | KNN (K = 5) |
|---|---|---|---|---|---|---|
| learn 1 vs. learn 2 | 73.45 | 69.31 | 80.86 | 73.28 | 71.03 | 65 |
| play 1 vs. play 2 | 75.34 | 68.79 | 80 | 74.48 | 69.14 | 64.31 |
| feedback 1 vs. feedback 2 | 76.38 | 69.48 | 80.34 | 74.14 | 69.14 | 66.9 |
TD vs. ASD emotion changes from baseline and exit sessions. Confusion matrices are row-normalized percentages (rows sum to 100).
| Kernel | Linear | Polynomial | RBF |
|---|---|---|---|
| Accuracy (%) | 75 | 62.5 | 80 |
| Confusion matrix, row 1 | 63, 37 | 50, 50 | 81, 19 |
| Confusion matrix, row 2 | 12, 88 | 25, 75 | 25, 75 |
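The row-normalized percentages in the table above follow directly from a raw confusion matrix of counts. A minimal sketch with scikit-learn, using hypothetical labels and predictions rather than the study's data:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical ground truth and predictions for a binary test (not the paper's data)
y_true = [0] * 8 + [1] * 8
y_pred = [0, 0, 0, 0, 0, 0, 0, 1,   # class 0: 7 correct, 1 miss
          1, 1, 1, 1, 1, 1, 0, 0]   # class 1: 6 correct, 2 misses

cm = confusion_matrix(y_true, y_pred)          # raw counts, rows = true class
row_pct = cm / cm.sum(axis=1, keepdims=True) * 100
print(cm)
print(row_pct)                                 # each row now sums to 100
```

Accuracy is the trace of the count matrix divided by the total number of samples, so both summaries in the table come from the same underlying predictions.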