| Literature DB >> 32575798 |
Jiacan Xu, Hao Zheng, Jianhui Wang, Donglin Li, Xiaoke Fang.
Abstract
Recognition of motor imagery intention is one of the current research hotspots in brain-computer interface (BCI) studies. It can help patients with physical dyskinesia convey their movement intentions. In recent years, breakthroughs have been made in recognizing motor imagery tasks with deep learning, but ignoring important features related to motor imagery can degrade an algorithm's recognition performance. This paper proposes a new deep multi-view feature learning method for the classification of motor imagery electroencephalogram (EEG) signals. To obtain more representative motor imagery features from EEG signals, we introduce a multi-view feature representation based on the characteristics of EEG signals and the differences between features. Different feature extraction methods are used to extract the time-domain, frequency-domain, time-frequency-domain, and spatial features of EEG signals, so that they cooperate and complement each other. A deep restricted Boltzmann machine (RBM) network, improved by t-distributed stochastic neighbor embedding (t-SNE), then learns the multi-view features, removing feature redundancy while taking into account the global characteristics of the multi-view feature sequence, reducing the feature dimension, and enhancing the recognizability of the features. Finally, a support vector machine (SVM) classifies the deep multi-view features. Applying the proposed method to the BCI Competition IV 2a dataset yields excellent classification results, showing that deep multi-view feature learning further improves the classification accuracy of motor imagery tasks.
Keywords: brain-computer interface (BCI); deep neural network; electroencephalography (EEG); multi-view learning; parametric t-distributed stochastic neighbor embedding (p.t-SNE)
Year: 2020 PMID: 32575798 PMCID: PMC7349253 DOI: 10.3390/s20123496
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
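The abstract describes a three-stage pipeline: multi-view feature extraction, deep feature learning, and SVM classification. As a rough illustration of the first and last stages, the sketch below concatenates two hypothetical feature views computed from synthetic EEG-shaped data and trains an SVM; all shapes, extractors, and parameters are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of the multi-view idea: several feature "views" are
# extracted from each EEG trial, concatenated, and classified with an SVM.
# The feature extractors here are placeholders, not the paper's methods.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 200, 22, 250   # shapes loosely modeled on 2a
X_raw = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 4, size=n_trials)            # four motor imagery classes

def time_view(x):
    # Time-domain view: per-channel variance of the signal
    return x.var(axis=-1)

def freq_view(x):
    # Frequency-domain view: log mean FFT magnitude in an assumed band
    spec = np.abs(np.fft.rfft(x, axis=-1))
    return np.log(spec[..., 8:30].mean(axis=-1) + 1e-12)

# Concatenate the views into one multi-view feature vector per trial
X = np.concatenate([time_view(X_raw), freq_view(X_raw)], axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("feature dim:", X.shape[1])   # 22 + 22 = 44
print("accuracy:", clf.score(X_te, y_te))   # near chance on random data
```

In the paper, the concatenated views additionally pass through the deep RBM network before reaching the SVM; this sketch skips that stage.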
Figure 1. Framework of the proposed method.
Figure 2. Timing scheme of a trial in the BCI Competition IV 2a dataset.
Figure 3. Network structure of the restricted Boltzmann machine (RBM): (a) schematic diagram of the four-layer RBM network; (b) schematic diagram of the number of hidden-layer nodes in each layer of the RBM network.
Figure 4. Schematic diagram of t-SNE adjusting the RBM pre-training network.
Figure 5. The optimal selection of parameters.
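Figures 3 and 4 depict a deep RBM network whose pre-training is then adjusted by t-SNE. The sketch below shows only greedy layer-wise RBM pretraining, using scikit-learn's `BernoulliRBM` on synthetic data; the t-SNE-based fine-tuning step is not reproduced, and the layer sizes and hyperparameters are assumptions rather than the paper's settings.

```python
# Greedy layer-wise pretraining of stacked RBMs: each RBM learns a hidden
# representation of the previous layer's activations, yielding progressively
# lower-dimensional deep features. Sizes here are arbitrary assumptions.
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
X = rng.random((100, 64))      # stand-in multi-view features scaled to [0, 1]

layer_sizes = [32, 16]         # hidden units per RBM layer (assumed)
h = X
rbms = []
for n_hidden in layer_sizes:
    rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05,
                       n_iter=20, random_state=0)
    h = rbm.fit_transform(h)   # hidden activations feed the next RBM
    rbms.append(rbm)

print(h.shape)                 # (100, 16): deep low-dimensional features
```

In the paper, a t-SNE-style objective then fine-tunes this pretrained stack so that the learned features preserve neighborhood structure before the SVM stage.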
Classification accuracy and kappa score comparison between multi-view and single-view features on the BCI Competition IV 2a dataset. Values are accuracy in %, with the kappa score in parentheses.
| Subject | Single-View Time Domain | Single-View Frequency Domain | Single-View Time-Frequency Domain | Multi-View |
|---|---|---|---|---|
| Subject 1 | 75.8929 (0.6786) | 80.3571 (0.7381) | 79.4643 (0.7262) | 86.6071 (0.8214) |
| Subject 2 | 50.4505 (0.3399) | 54.9550 (0.3994) | 52.2523 (0.3635) | 61.2613 (0.4838) |
| Subject 3 | 76.3636 (0.6849) | 80.9091 (0.7454) | 79.0909 (0.7211) | 87.2727 (0.7696) |
| Subject 4 | 58.8000 (0.4506) | 70.8000 (0.6102) | 64.6000 (0.5278) | 75.2000 (0.6664) |
| Subject 5 | 47.2727 (0.2954) | 45.4545 (0.2709) | 50.9091 (0.3439) | 64.5455 (0.5024) |
| Subject 6 | 44.3182 (0.2571) | 48.8636 (0.3179) | 47.7273 (0.3016) | 65.9091 (0.5301) |
| Subject 7 | 77.4775 (0.6995) | 80.1802 (0.7357) | 76.5766 (0.6875) | 83.7838 (0.7837) |
| Subject 8 | 79.8165 (0.7308) | 80.7339 (0.7431) | 75.2294 (0.6697) | 89.9083 (0.8655) |
| Subject 9 | 83.1683 (0.7752) | 82.1782 (0.7620) | 72.2772 (0.6293) | 92.0792 (0.8942) |
| Average | 65.9511 (0.5458) | 69.3812 (0.5914) | 66.4586 (0.5523) | 78.5074 (0.6278) |
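The parenthesized values in the table above are kappa scores (Cohen's kappa), which rescale raw accuracy by the agreement expected from chance. A minimal illustration with scikit-learn's `cohen_kappa_score` on a toy four-class labeling:

```python
# Cohen's kappa corrects accuracy for chance agreement:
# kappa = (p_o - p_e) / (1 - p_e), where p_o is observed accuracy and
# p_e is the chance agreement implied by the label marginals.
from sklearn.metrics import cohen_kappa_score

y_true = [0, 0, 1, 1, 2, 2, 3, 3]
y_pred = [0, 0, 1, 2, 2, 2, 3, 1]    # 6/8 correct -> accuracy 0.75
kappa = cohen_kappa_score(y_true, y_pred)
print(round(kappa, 4))               # 0.6667 = (0.75 - 0.25) / (1 - 0.25)
```

With balanced labels and a symmetric prediction distribution, chance agreement is 0.25 here, so an accuracy of 0.75 maps to a kappa of 2/3.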
Classification accuracy (%) comparison with other published results on the BCI Competition IV 2a dataset. The best result for each subject is displayed in bold characters.
| Subject | FBCSP | BO | Monolithic Network | FBCSP-SVM | CW-CNN | SCSSP | DFFN | Proposed Method |
|---|---|---|---|---|---|---|---|---|
| Subject 1 | 76.00 | 82.12 | 83.13 | 82.29 | 86.11 | 67.88 | 83.20 | **86.6071** |
| Subject 2 | 56.50 | 44.86 | 65.45 | 60.42 | 60.76 | 42.18 | — | 61.2613 |
| Subject 3 | 81.25 | 86.6 | 80.29 | 82.99 | 86.81 | 77.87 | — | 87.2727 |
| Subject 4 | 61.00 | 66.28 | — | 72.57 | 67.36 | 51.77 | 69.42 | 75.2000 |
| Subject 5 | 55.00 | 48.72 | — | 60.07 | 62.50 | 50.17 | 61.65 | 64.5455 |
| Subject 6 | 45.25 | 53.3 | — | 44.10 | 45.14 | 45.97 | 60.74 | 65.9091 |
| Subject 7 | 82.75 | 72.64 | 84.00 | 86.11 | — | 87.5 | 85.18 | 83.7838 |
| Subject 8 | 81.25 | 82.33 | 82.66 | 77.08 | 81.25 | 85.79 | 84.21 | **89.9083** |
| Subject 9 | 70.75 | 76.35 | 80.74 | 75.00 | 77.08 | 76.31 | 85.48 | **92.0792** |
| Average | 67.75 | 68.13 | 78.41 | 71.18 | 73.07 | 65.05 | 76.44 | **78.5074** |
Kappa value comparison with other published results on the BCI Competition IV 2a dataset. The best result for each subject is displayed in bold characters.
| Subject | SS-MEMDBF | Miao et al. | Monolithic Network | FBCSP-SVM | CW-CNN | sMLR | TSSM-SVM | Proposed Method |
|---|---|---|---|---|---|---|---|---|
| Subject 1 | — | 0.6481 | 0.67 | 0.7640 | 0.8150 | 0.7407 | 0.70 | 0.8214 |
| Subject 2 | 0.24 | 0.3657 | 0.35 | 0.4720 | 0.4770 | 0.2685 | 0.32 | **0.4838** |
| Subject 3 | 0.70 | 0.6632 | 0.65 | 0.7730 | — | 0.7685 | 0.75 | 0.7696 |
| Subject 4 | — | 0.5046 | 0.62 | 0.6340 | 0.5650 | 0.4259 | 0.54 | 0.6664 |
| Subject 5 | 0.36 | 0.3241 | — | 0.4680 | 0.5000 | 0.2870 | 0.32 | 0.5024 |
| Subject 6 | 0.34 | 0.2963 | 0.45 | 0.2550 | 0.2690 | 0.2685 | 0.34 | **0.5301** |
| Subject 7 | 0.66 | 0.7188 | 0.69 | 0.8150 | — | 0.7315 | 0.70 | 0.7837 |
| Subject 8 | 0.75 | 0.6354 | 0.70 | 0.6940 | 0.7500 | 0.7685 | 0.69 | **0.8655** |
| Subject 9 | 0.82 | 0.6458 | 0.64 | 0.6670 | 0.6940 | 0.7963 | 0.77 | **0.8942** |
| Average | 0.60 | 0.5336 | 0.59 | 0.6160 | — | 0.5617 | 0.571 | 0.6278 |
Accuracy (%) comparison with other classifiers. The best result for each subject is displayed in bold characters.
| Subject | Decision Tree | LDA | KNN | NB | SD | SVM |
|---|---|---|---|---|---|---|
| Subject 1 | 74.5 | 85.4 | 75.3 | 82.7 | 86.3 | **86.6071** |
| Subject 2 | 43.9 | 59.0 | 47.7 | 50.5 | 58.2 | **61.2613** |
| Subject 3 | 78.8 | 86.9 | 77.5 | 85.6 | — | 87.2727 |
| Subject 4 | 48.8 | 68.2 | 50.6 | 63.9 | 72.4 | **75.2000** |
| Subject 5 | 52.2 | 59.6 | 55.4 | 57.4 | — | 64.5455 |
| Subject 6 | 44.2 | 65.2 | 53.7 | 62.7 | 65.7 | **65.9091** |
| Subject 7 | 73.2 | 80.3 | 72.4 | 81.2 | 82.1 | **83.7838** |
| Subject 8 | 70.8 | 87.7 | 75.3 | 84.1 | 89.5 | **89.9083** |
| Subject 9 | 75.6 | 81.4 | 77.4 | 77.6 | 88.4 | **92.0792** |
| Average | 54.64 | 74.86 | 65.03 | 71.74 | 77.46 | **78.5074** |
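The table above compares the final SVM stage against several standard classifiers. A hedged sketch of such a comparison with scikit-learn on synthetic data follows; the real study fed the learned deep multi-view EEG features to these classifiers, which are not reproduced here.

```python
# Cross-validated accuracy of several standard classifiers on synthetic
# 4-class data, mirroring the structure of the comparison table above.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, n_informative=8,
                           n_classes=4, random_state=0)
classifiers = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
    "KNN": KNeighborsClassifier(),
    "NB": GaussianNB(),
    "SVM": SVC(kernel="rbf"),
}
scores = {}
for name, clf in classifiers.items():
    scores[name] = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {100 * scores[name]:.1f}%")
```

On real features the ranking depends on the data, which is why the paper reports the comparison per subject rather than assuming SVM wins everywhere.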
Figure 6. Visualization of single-view and multi-view features of EEG signals: (a) single-view features of time-domain signals; (b) single-view features of frequency-domain signals; (c) single-view features of time-frequency-domain signals; (d) multi-view features.