Rui Li, Di Liu, Zhijun Li, Jinli Liu, Jincao Zhou, Weiping Liu, Bo Liu, Weiping Fu, Ahmad Bala Alhassan
Abstract
Multiple types of brain-control systems have been applied in the field of rehabilitation. As an alternative scheme for balancing user fatigue against the classification accuracy of brain-computer interface (BCI) systems, facial-expression-based brain-control technologies have been proposed in the form of novel BCI systems. Unfortunately, existing machine learning algorithms fail to identify the most relevant features of electroencephalogram (EEG) signals, which limits the performance of the classifiers. To address this problem, an improved classification method is proposed for facial-expression-based BCI (FE-BCI) systems, using a convolutional neural network (CNN) combined with a genetic algorithm (GA). The CNN extracts and classifies features, while the GA selects the hyperparameters most relevant to classification. To validate the proposed algorithm, its performance was systematically evaluated in offline experiments, and a trained CNN–GA model was used to control an intelligent car in real time. The average accuracy across all subjects was 89.21 ± 3.79%, and the highest accuracy was 97.71 ± 2.07%. Both the offline and online results demonstrate that the improved FE-BCI system outperforms traditional methods.
Keywords: EEG; brain computer interface; convolutional neural network (CNN); facial expression; genetic algorithm
Year: 2022 PMID: 36177358 PMCID: PMC9513431 DOI: 10.3389/fnins.2022.988535
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 5.152
Figure 1. NeuSen-W64 EEG recording system and channel locations. (A) NeuSen-W64 EEG recording system. (B) Selected eight-channel configuration of the NeuSen-W64.
Figure 2. FE-BCI system and its experimental protocol. (A) Experimental protocol for the offline experiment. (B) Scheme of the FE-BCI system for an intelligent car.
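The control scheme in panel (B) amounts to classifying each incoming EEG window and forwarding the predicted expression class to the car as a motion command. The sketch below illustrates that loop; the four-command mapping, the TCP transport, and all names (`classify`, `windows`, host/port) are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of the online control loop (Figure 2B), under assumptions:
# a trained classifier mapping an EEG window to one of four expression
# classes, and a car that accepts newline-terminated ASCII commands over TCP.
import socket

COMMANDS = {0: b"FORWARD", 1: b"BACKWARD", 2: b"LEFT", 3: b"RIGHT"}  # assumed mapping

def control_loop(classify, windows, host="192.168.0.10", port=5000):
    """classify: EEG window -> class id; windows: iterable of EEG windows."""
    with socket.create_connection((host, port)) as link:
        for window in windows:
            link.sendall(COMMANDS[classify(window)] + b"\n")  # one command per window
```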
Figure 3. Architecture of the proposed CNN model.
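As a rough illustration of what a compact EEG CNN of this kind can look like, here is a temporal-then-spatial convolution sketch in PyTorch. Every shape and layer choice below (8 channels, 250 samples per window, 4 classes, filter counts) is an assumption for illustration, not the paper's reported architecture.

```python
# Hypothetical sketch of a compact EEG CNN in the spirit of Figure 3.
import torch
import torch.nn as nn

class EEGConvNet(nn.Module):
    def __init__(self, n_channels=8, n_samples=250, n_classes=4,
                 n_filters=16, kernel_size=5):
        super().__init__()
        self.features = nn.Sequential(
            # temporal convolution applied to each channel independently
            nn.Conv2d(1, n_filters, kernel_size=(1, kernel_size),
                      padding=(0, kernel_size // 2)),
            nn.BatchNorm2d(n_filters),
            nn.ELU(),
            # spatial convolution across the electrodes
            nn.Conv2d(n_filters, n_filters * 2, kernel_size=(n_channels, 1)),
            nn.BatchNorm2d(n_filters * 2),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(0.5),
        )
        # infer the flattened feature size with a dummy forward pass
        with torch.no_grad():
            n_flat = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(n_flat, n_classes)

    def forward(self, x):  # x: (batch, 1, channels, samples)
        return self.classifier(self.features(x).flatten(1))
```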
Figure 4. Architecture of the proposed GA.
Figure 5. Scheme of the proposed CNN–GA algorithm.
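The loop in Figure 5 can be summarized as: encode a set of CNN hyperparameters as a genome, score each genome by validation accuracy, and evolve the population by selection, crossover, and mutation. Below is a minimal GA sketch under stated assumptions: the gene space, rates, and population size are illustrative, and the toy `evaluate` stands in for training the CNN and returning validation accuracy.

```python
# Minimal GA sketch for CNN hyperparameter selection (Figure 5). The gene
# space and GA settings are assumptions, not the paper's reported values.
import random

GENE_SPACE = {
    "lr":          [1e-4, 3e-4, 1e-3, 3e-3],
    "n_filters":   [8, 16, 32, 64],
    "kernel_size": [3, 5, 7, 9],
    "dropout":     [0.25, 0.5],
}

def random_genome():
    return {k: random.choice(v) for k, v in GENE_SPACE.items()}

def evaluate(genome):
    # Toy surrogate fitness; replace with CNN training + validation accuracy.
    return (-100 * abs(genome["lr"] - 1e-3)
            + genome["n_filters"] / 64
            - 0.01 * abs(genome["kernel_size"] - 5))

def crossover(a, b):
    # Uniform crossover: each gene comes from either parent.
    return {k: random.choice((a[k], b[k])) for k in GENE_SPACE}

def mutate(genome, rate=0.2):
    # Re-sample each gene with probability `rate`.
    return {k: random.choice(GENE_SPACE[k]) if random.random() < rate else v
            for k, v in genome.items()}

def run_ga(pop_size=10, generations=20, n_elite=2):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=evaluate, reverse=True)
        parents = ranked[: pop_size // 2]
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - n_elite)]
        population = ranked[:n_elite] + children  # elitism keeps the best genomes
    return max(population, key=evaluate)

if __name__ == "__main__":
    print(run_ga())
```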
Figure 6. Performance of the CNN and CNN–GA algorithms. (A) Accuracy of the CNN model versus the CNN–GA model. (B) Loss of the CNN model versus the CNN–GA model.
Figure 7. Hyperparameter optimization performance of the CNN–GA. (A) Process of hyperparameter optimization by the GA. (B) Confusion matrices of the CNN and the CNN–GA.
Offline accuracies of the CNN–GA for S2.
| Run | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Avg ± Std |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Acc (%) | 97.92 | 100 | 97.92 | 95.83 | 93.75 | 95.83 | 97.92 | 100 | 100 | 97.92 | 97.71 ± 2.07 |
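As a quick check, the summary entry is the sample mean and sample standard deviation (Bessel-corrected) of the ten run accuracies:

```python
# Reproduce the Avg ± Std entry for S2 from the ten run accuracies.
import statistics

acc = [97.92, 100, 97.92, 95.83, 93.75, 95.83, 97.92, 100, 100, 97.92]
print(f"{statistics.mean(acc):.2f} ± {statistics.stdev(acc):.2f}")  # -> 97.71 ± 2.07
```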
Averaged accuracies for each subject under the CNN and CNN–GA methods.
| Subject | CNN Acc (%) | CNN κ | CNN–GA Acc (%) | CNN–GA κ |
|---|---|---|---|---|
| S1 | 84.90 ± 5.77 | 0.80 | 87.06 ± 4.51 | 0.83 |
| S2 | 94.48 ± 3.90 | 0.93 | 97.71 ± 2.07 | 0.96 |
| S3 | 80.52 ± 4.74 | 0.74 | 85.21 ± 4.92 | 0.80 |
| S4 | 72.80 ± 6.22 | 0.61 | 76.43 ± 7.13 | 0.69 |
| S5 | 88.75 ± 7.13 | 0.85 | 91.46 ± 5.27 | 0.89 |
| S6 | 92.29 ± 6.93 | 0.90 | 96.88 ± 2.75 | 0.96 |
| S7 | 86.77 ± 6.56 | 0.82 | 91.25 ± 2.75 | 0.88 |
| S8 | 82.55 ± 6.22 | 0.77 | 86.09 ± 3.96 | 0.81 |
| S9 | 83.13 ± 3.29 | 0.78 | 84.90 ± 2.27 | 0.80 |
| S10 | 92.81 ± 16.29 | 0.92 | 95.83 ± 3.38 | 0.94 |
| S11 | 89.27 ± 6.53 | 0.91 | 94.79 ± 3.54 | 0.93 |
| S12 | 95.63 ± 6.34 | 0.94 | 96.77 ± 2.19 | 0.96 |
| S13 | 78.54 ± 5.97 | 0.71 | 81.88 ± 3.39 | 0.76 |
| S14 | 81.46 ± 6.08 | 0.75 | 83.52 ± 3.02 | 0.78 |
| S15 | 81.15 ± 6.06 | 0.75 | 85.21 ± 4.87 | 0.80 |
| S16 | 90.00 ± 6.07 | 0.87 | 93.02 ± 3.90 | 0.91 |
| Avg ± Std | 85.94 ± 6.51 | 0.816 | 89.21 ± 3.79 | 0.856 |
Averaged accuracies of each subject under the WT–BPNN and CNN–GA methods.
| Subject | WT–BPNN κ | WT–BPNN Acc (%) | CNN–GA κ | CNN–GA Acc (%) |
|---|---|---|---|---|
| S1 | 0.754 | 81.56 ± 6.77 | 0.828 | 87.06 ± 4.51 |
| S2 | 0.874 | 90.42 ± 6.57 | 0.970 | 97.19 ± 2.81 |
| S3 | 0.683 | 76.25 ± 8.18 | 0.803 | 85.21 ± 4.92 |
| S4 | 0.613 | 70.94 ± 7.60 | 0.686 | 76.43 ± 7.13 |
| S5 | 0.774 | 83.02 ± 8.64 | 0.886 | 91.46 ± 5.27 |
| S6 | 0.879 | 90.94 ± 4.07 | 0.958 | 96.88 ± 2.75 |
| S7 | 0.722 | 79.17 ± 9.17 | 0.883 | 91.25 ± 2.75 |
| S8 | 0.739 | 80.42 ± 6.25 | 0.815 | 86.09 ± 3.96 |
| S9 | 0.742 | 80.63 ± 6.04 | 0.799 | 84.90 ± 2.27 |
| S10 | 0.803 | 85.21 ± 6.48 | 0.944 | 95.83 ± 3.38 |
| S11 | 0.828 | 87.08 ± 8.78 | 0.931 | 94.79 ± 3.54 |
| S12 | 0.833 | 87.50 ± 10.71 | 0.957 | 96.77 ± 2.19 |
| S13 | 0.667 | 75.00 ± 7.34 | 0.758 | 81.88 ± 3.39 |
| S14 | 0.690 | 76.77 ± 7.26 | 0.780 | 83.52 ± 3.02 |
| S15 | 0.701 | 77.60 ± 7.26 | 0.803 | 85.21 ± 4.87 |
| S16 | 0.776 | 83.23 ± 6.63 | 0.907 | 93.02 ± 3.90 |
| Mean ± Std | 0.755 ± 0.076 | 81.60 ± 7.36 | 0.857 ± 0.084 | 89.21 ± 3.79 |
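The κ values in these tables are consistent with Cohen's kappa computed from accuracy under a balanced four-class assumption, κ = (p0 − pe)/(1 − pe) with pe = 1/4; for example, S4 under CNN–GA (76.43%) gives 0.686, matching the table:

```python
# Cohen's kappa from accuracy, assuming four balanced classes (pe = 1/4).
def kappa_from_accuracy(p0, n_classes=4):
    pe = 1.0 / n_classes            # chance agreement for balanced classes
    return (p0 - pe) / (1.0 - pe)

print(round(kappa_from_accuracy(0.7643), 3))  # -> 0.686 (S4, CNN–GA)
```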
Figure 8. Offline classification accuracies and standard deviations under the three methods.
Figure 9. Online scenario and optimal recognition performance within a single session for S2. (A) Online experimental scene. (B) One representative decision procedure for S2.
Averaged accuracies of each subject in the online task.
| Subject | Session 1 | Session 2 | Session 3 | Session 4 | Avg ± Std (%) |
|---|---|---|---|---|---|
| S1 | 79.41 | 83.33 | 73.33 | 88.89 | 81.24 ± 6.55 |
| S2 | 97.06 | 88.89 | 96.67 | 100 | 95.65 ± 4.75 |
| S3 | 70.59 | 77.78 | 100 | 94.44 | 85.70 ± 13.81 |
| S4 | 76.47 | 77.78 | 83.33 | 77.78 | 78.84 ± 3.06 |
| S5 | 91.18 | 83.33 | 93.33 | 94.44 | 90.57 ± 5.01 |
| S6 | 94.12 | 100 | 96.67 | 94.44 | 96.31 ± 2.71 |
| S7 | 94.17 | 88.89 | 90.00 | 77.78 | 85.56 ± 6.76 |
| S8 | 97.06 | 88.89 | 93.33 | 94.44 | 93.43 ± 3.41 |
| S9 | 85.29 | 83.33 | 83.33 | 72.22 | 81.04 ± 5.95 |
| S10 | 94.12 | 100 | 96.67 | 94.44 | 96.31 ± 2.71 |
| S11 | 91.17 | 83.33 | 76.67 | 88.89 | 85.01 ± 6.46 |
| S12 | 97.06 | 88.89 | 93.33 | 88.89 | 92.04 ± 3.95 |
| S13 | 79.41 | 88.89 | 83.33 | 83.33 | 83.74 ± 3.90 |
| S14 | 85.29 | 77.78 | 83.33 | 77.78 | 81.05 ± 3.85 |
| S15 | 82.35 | 83.33 | 90.00 | 77.78 | 83.36 ± 5.04 |
| S16 | 85.29 | 77.78 | 80.00 | 72.22 | 78.82 ± 5.41 |
| Mean ± Std | 87.50 ± 8.22 | 85.41 ± 6.37 | 88.33 ± 7.98 | 85.76 ± 8.83 | 86.61 ± 6.06 |
Performance comparison with previous related work.
| Study | Signal | Stimulus | Method | Hyperparameter selection | Acc (%) |
|---|---|---|---|---|---|
| Cheng et al. | EEG (P300) | P300 evoked potential | Features extracted by calculating percentiles of the EEG; classified by Bayesian linear discriminant analysis | From previous studies | 91.9 |
| Tian et al. | EEG (N170) | | N170 features extracted by dimensionality reduction and normalization; classified by L1-regularized logistic regression | | 86.4 |
| Thammasan et al. | EEG | Music | Features extracted by the Higuchi algorithm; classified by SVM | By experience | 85.0 |
| Ozerdem and Polat | EEG | Film clips | Features extracted by wavelet transform; classified by MLPNN | From previous studies | 77.14 |
| Zheng et al. | | | Features extracted by STFT; classified by a graph-regularized extreme learning machine | | 69.67 |
| Huang et al. | Picture information; EEG | Face pictures; facial expression | Picture features extracted by AdaBoost and classified by a neural network; EEG features extracted by STFT and classified by SVM | Brute-force search | 82.75 |
| Toth and Arvaneh | EEG; gyroscope | Facial expression | Features extracted by FFT; classified by SVM–LDA–Bayesian | By experience | 70.3 |
| Li et al. | EEG | | Features extracted by wavelet transform; classified by BPNN | | 81.28 |
| The proposed study | EEG | Facial expression | Features extracted and classified by CNN | By GA | 89.21 |