Mengyi Liao1,2, Hengyao Duan1, Guangshuai Wang2.
Abstract
Early detection of autism spectrum disorder (ASD) is highly beneficial to children's long-term health. Existing detection methods depend on expert assessment, which is subjective and costly. In this study, we proposed a machine learning approach that fuses physiological data (electroencephalography, EEG) and behavioral data (eye fixation and facial expression) to detect children with ASD, improving detection efficiency and reducing cost. First, we used an innovative approach to extract features from the eye fixation, facial expression, and EEG data. Then, a hybrid fusion approach based on a weighted naive Bayes algorithm was presented for multimodal data fusion, achieving a classification accuracy of 87.50%. The results suggest that the machine learning classification approach in this study is effective for the early detection of ASD. Confusion matrices and graphs demonstrate that eye fixation, facial expression, and EEG have different discriminative powers for distinguishing children with ASD from typically developing (TD) children, and EEG may be the most discriminative information. The physiological and behavioral data have important complementary characteristics. Thus, the machine learning approach proposed in this study, which combines this complementary information, can significantly improve classification accuracy.
Year: 2022 PMID: 35368925 PMCID: PMC8975630 DOI: 10.1155/2022/9340027
Source DB: PubMed Journal: J Healthc Eng ISSN: 2040-2295 Impact factor: 2.682
Figure 1 The experimental scene and data analysis framework. Note. Data were collected using a Tobii Eye Tracker, an Emotiv EPOC+, and a camera, providing the eye fixation, EEG, and facial expression data, respectively.
Figure 2 Different AOIs divided by the K-means algorithm. (a) K = 8, (b) K = 12, (c) K = 16, (d) K = 20.
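The AOI partition in Figure 2 can be sketched with a minimal Lloyd's-algorithm K-means over gaze fixation coordinates. This is an illustrative sketch only: the fixation points and screen dimensions below are synthetic assumptions, not the study's data.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Minimal Lloyd's algorithm: cluster 2-D fixation points into k AOIs."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each fixation to its nearest AOI center.
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned fixations.
        new = np.array([points[labels == j].mean(axis=0) if (labels == j).any()
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Synthetic (x, y) fixations on an assumed 1280x720 stimulus.
rng = np.random.default_rng(1)
fixations = rng.uniform([0, 0], [1280, 720], size=(400, 2))

for k in (8, 12, 16, 20):  # the four partitions compared in Figure 2
    labels, centers = kmeans(fixations, k)
    print(k, centers.shape)  # one centroid per AOI
```

Each fixation's AOI label can then serve as a categorical eye-fixation feature; the four K values mirror the panels of Figure 2.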
Figure 3 The framework of facial expression recognition based on a convolutional neural network (CNN) and soft labels.
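One way to read the soft-label component of Figure 3 (a sketch, not the authors' exact formulation): instead of a one-hot expression class, each face frame carries a probability vector, and the CNN is trained with cross-entropy against that soft target. The logits and label distributions below are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def soft_label_cross_entropy(logits, targets, eps=1e-12):
    """Cross-entropy H(q, p) against a (possibly soft) target distribution q."""
    p = softmax(logits)
    return -(targets * np.log(p + eps)).sum(axis=-1).mean()

# One face frame scored over three hypothetical expression classes.
logits = np.array([[2.0, 0.5, -1.0]])
hard = np.array([[1.0, 0.0, 0.0]])  # one-hot label
soft = np.array([[0.7, 0.2, 0.1]])  # soft label: graded expression intensity
print(soft_label_cross_entropy(logits, hard))
print(soft_label_cross_entropy(logits, soft))
```

The same loss function handles both cases; the soft target simply spreads probability mass across classes, which suits ambiguous facial expressions.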
t-test on the power of each band in different brain regions of children with ASD and TD.

| Band | Region | F | t | p | Mean difference | Std. error difference |
|---|---|---|---|---|---|---|
| Theta | LF | 7.38 | 5.82 | 0.00 | 4.55 | 0.72 |
| | RF | 1.85 | 3.02 | 0.01 | 3.46 | 1.08 |
| | LT | 1.80 | −0.81 | 0.42 | −0.94 | 1.07 |
| | RT | 13.53 | 2.91 | 0.01 | 3.82 | 1.20 |
| | P | 11.47 | 3.67 | 0.00 | 4.16 | 1.12 |
| | O | 13.50 | 4.72 | 0.00 | 4.44 | 0.88 |
| Alpha | LF | 0.30 | 2.01 | 0.05 | 1.27 | 0.63 |
| | RF | 0.41 | 1.77 | 0.09 | 1.27 | 0.73 |
| | LT | 0.22 | −0.86 | 0.40 | −0.49 | 0.52 |
| | RT | 0.92 | 1.10 | 0.28 | 0.92 | 0.84 |
| | P | 7.52 | 2.40 | 0.02 | 1.85 | 0.73 |
| | O | 5.74 | 2.91 | 0.01 | 1.77 | 0.58 |
| Low beta | LF | 1.31 | 2.75 | 0.01 | 0.86 | 0.31 |
| | RF | 2.90 | 2.92 | 0.01 | 0.95 | 1.78 |
| | LT | 1.33 | 0.36 | 0.72 | 0.18 | 0.50 |
| | RT | 1.25 | 1.73 | 0.09 | 0.83 | 0.55 |
| | P | 9.16 | 2.49 | 0.02 | 1.94 | 0.75 |
| | O | 8.48 | 3.65 | 0.00 | 0.87 | 0.23 |
| High beta | LF | 1.16 | 1.80 | 0.08 | 0.75 | 0.40 |
| | RF | 2.63 | 1.80 | 0.08 | 0.83 | 0.47 |
| | LT | 4.17 | 0.78 | 0.44 | 0.73 | 0.88 |
| | RT | 0.06 | 0.28 | 0.79 | 0.25 | 0.91 |
| | P | 5.02 | 1.59 | 0.12 | 1.65 | 1.00 |
| | O | 2.73 | 1.43 | 0.16 | 0.33 | 0.20 |
| Gamma | LF | 5.09 | 2.28 | 0.03 | 1.25 | 0.55 |
| | RF | 3.16 | 1.77 | 0.09 | 1.32 | 0.74 |
| | LT | 0.11 | −0.22 | 0.83 | −0.28 | 1.13 |
| | RT | 2.28 | −0.91 | 0.37 | −1.16 | 1.14 |
| | P | 3.69 | 1.43 | 0.16 | 0.95 | 0.62 |
| | O | 1.26 | 1.09 | 0.28 | 0.20 | 0.16 |

Note. LF = left frontal, RF = right frontal, LT = left temporal, RT = right temporal, P = parietal, O = occipital. AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, and AF4 are the 14 channels defined by the international 10–20 system. Significance thresholds: p < 0.05 and p < 0.01.
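The table's last two columns come from a standard independent-samples t-test: the mean difference between groups and its standard error, whose ratio is the t statistic. A from-scratch pooled-variance sketch, run on made-up band-power values (not the study's data):

```python
import math

def independent_t_test(a, b):
    """Pooled-variance two-sample t-test: returns (t, mean_diff, se_diff)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    se = math.sqrt(sp2 * (1 / na + 1 / nb))  # std. error of the difference
    return (ma - mb) / se, ma - mb, se

# Illustrative theta-band power values for two small groups (synthetic).
asd = [6.1, 5.8, 7.0, 6.4, 5.9]
td = [4.2, 4.8, 4.0, 4.5, 4.1]
t, md, se = independent_t_test(asd, td)
print(round(t, 2), round(md, 2), round(se, 2))  # → 7.36 1.92 0.26
```

With real data, `scipy.stats.ttest_ind` would give the same t along with the p value used in the table's significance column.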
Figure 4 Hybrid fusion framework based on the weighted naive Bayes algorithm.
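One plausible reading of the Figure 4 fusion step (a sketch under assumptions, not the authors' exact algorithm): each modality produces class posteriors, and a weighted naive-Bayes-style product combines them, with each modality's weight reflecting its reliability. The posteriors and weights below are illustrative.

```python
import numpy as np

def weighted_nb_fusion(posteriors, weights):
    """Fuse per-modality posteriors P_m(c|x) with weights w_m:
    score(c) ∝ prod_m P_m(c|x)**w_m, computed in the log domain for stability."""
    log_score = sum(w * np.log(p) for p, w in zip(posteriors, weights))
    score = np.exp(log_score - log_score.max())
    return score / score.sum()  # renormalize to a distribution

# Illustrative posteriors over classes [ASD, TD] from the three modalities.
eye = np.array([0.60, 0.40])
face = np.array([0.55, 0.45])
eeg = np.array([0.80, 0.20])
# Illustrative weights, e.g. proportional to each modality's accuracy.
weights = [0.675, 0.716, 0.747]

fused = weighted_nb_fusion([eye, face, eeg], weights)
print(fused)  # EEG's stronger evidence dominates the fused decision
```

Down-weighting a weaker modality keeps its complementary signal without letting it overrule the more discriminative EEG channel, matching the paper's finding that the modalities complement one another.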
Accuracies of single-modality classification (%).

| Classifier | Eye fixation data | Facial expression data | EEG data |
|---|---|---|---|
| DT | 61.25 | 82.50 | 77.50 |
| RF | 73.75 | 77.50 | 83.75 |
| SVM | 65.00 | 61.25 | 65.00 |
| KNN | 70.00 | 65.00 | 72.50 |
| AVG | 67.50 | 71.56 | 74.69 |

Note. DT, RF, SVM, and KNN represent decision tree, random forest, support-vector machine, and K-nearest neighbor, respectively.
Accuracies of different classification methods (%).
| Data | Accuracies of classification |
|---|---|
| Physiological data classification | 83.75 |
| Behavioral data classification | 85.00 |
| Hybrid fusion classification | 87.50 |
Figure 5 Confusion matrices of single-modality classification and hybrid fusion classification. Note. Each row of a confusion matrix represents the predicted class, and each column represents the target class. Element (i, j) is the percentage of samples in class j that are predicted as class i. (a) Eye fixation. (b) Facial expression. (c) EEG. (d) Hybrid fusion based on weighted naive Bayes.
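The column-normalized convention described in the Figure 5 note can be sketched directly; the labels below are illustrative, not the study's predictions.

```python
import numpy as np

def column_normalized_confusion(y_true, y_pred, n_classes=2):
    """M[i, j] = percentage of samples whose target class is j
    that are predicted as class i (columns sum to 100)."""
    m = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        m[p, t] += 1  # row = predicted class, column = target class
    return 100 * m / m.sum(axis=0, keepdims=True)

# Illustrative labels (0 = ASD, 1 = TD).
y_true = [0, 0, 0, 0, 1, 1, 1, 1]
y_pred = [0, 0, 0, 1, 1, 1, 0, 1]
print(column_normalized_confusion(y_true, y_pred))
```

With this convention, the diagonal gives per-class recall, which is what Figure 5's panels compare across modalities.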
Figure 6 Confusion graphs of the single modalities, showing their complementary characteristics for identifying ASD and TD. The numbers represent the percentage of samples in the class at the arrow's tail predicted as the class at its head. (a) The complementary characteristics of eye fixation and facial expression. (b) The complementary characteristics of eye fixation and EEG. (c) The complementary characteristics of facial expression and EEG.