Xuying Wang (1,2), Rui Yang (1,3), Mengjie Huang (4)
Abstract
Brain-computer interface (BCI) research has attracted worldwide attention and is developing rapidly. As a well-known non-invasive BCI technique, electroencephalography (EEG) records the brain's electrical signals from the scalp surface. However, because EEG signals are non-stationary, the distributions of data collected at different times or from different subjects may differ. These problems affect the performance of a BCI system and limit the scope of its practical application. In this study, an unsupervised deep-transfer-learning-based method was proposed to address the current limitations of BCI systems by applying the idea of transfer learning to the classification of motor imagery EEG signals. The Euclidean space data alignment (EA) approach was adopted to align the covariance matrices of the source- and target-domain EEG data in Euclidean space. The common spatial pattern (CSP) was then used to extract features from the aligned data, and a deep convolutional neural network (CNN) was applied for EEG classification. The effectiveness of the proposed method was verified by experimental results on public EEG datasets, in comparison with four other methods.
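The EA step described above can be sketched in NumPy. This is a minimal illustration of the standard EA procedure (reference matrix = arithmetic mean of per-trial spatial covariance matrices; each trial whitened by its inverse square root), not the authors' code; the function name `euclidean_alignment` is illustrative.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def euclidean_alignment(trials):
    """Align EEG trials of shape (n_trials, n_channels, n_samples).

    The reference matrix is the mean of the per-trial spatial
    covariance matrices; each trial is whitened by its inverse
    square root, so the aligned mean covariance becomes identity.
    """
    covs = np.array([x @ x.T / x.shape[1] for x in trials])
    r_mean = covs.mean(axis=0)
    # discard any tiny imaginary residue from the matrix power
    r_inv_sqrt = np.real(fractional_matrix_power(r_mean, -0.5))
    return np.array([r_inv_sqrt @ x for x in trials])

# demo on synthetic data: after alignment, the mean spatial
# covariance of the trials is (numerically) the identity matrix
rng = np.random.default_rng(0)
trials = rng.standard_normal((10, 4, 100))
aligned = euclidean_alignment(trials)
mean_cov = np.mean([x @ x.T / x.shape[1] for x in aligned], axis=0)
```

Because alignment is unsupervised (no labels are needed), it can be applied to source- and target-domain data independently, which is what makes it useful for cross-subject transfer.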
Keywords: brain–computer interface; common spatial pattern; electroencephalography; motor imagery; transfer learning
Year: 2022 PMID: 35336418 PMCID: PMC8950019 DOI: 10.3390/s22062241
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. The experiment flowchart.
Figure 2. Experiment paradigm.
Figure 3. Motor imagery EEG signals of C3 for the two classes.
Figure 4. Three-layer wavelet decomposition of C3. (a) Original signal; (b) Reconstructed signal.
Figure 5. t-SNE visualization before (left) and after (right) EA.
Figure 6. CSP feature extraction.
Figure 7. The structure of the CNN.
Figure 8. The training process and loss of the CNN.
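Figure 6 depicts CSP feature extraction. As a hedged sketch (the helper names and the log-variance feature choice are illustrative assumptions, not taken from the paper), two-class CSP reduces to a generalized eigenvalue problem on the class-mean covariance matrices:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """CSP spatial filters for a two-class problem.

    trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns 2*n_pairs filters taken from the extreme generalized
    eigenvalues, i.e. the directions that maximally separate the
    variance of the two classes.
    """
    def mean_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # generalized eigenvalue problem: ca w = lambda (ca + cb) w
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return vecs[:, picks].T  # shape (2*n_pairs, n_channels)

def csp_features(trials, filters):
    """Normalized log-variance features of the filtered trials."""
    feats = []
    for x in trials:
        z = filters @ x
        var = z.var(axis=1)
        feats.append(np.log(var / var.sum()))
    return np.array(feats)

# demo on synthetic two-class data
rng = np.random.default_rng(1)
class_a = rng.standard_normal((8, 6, 200))
class_b = rng.standard_normal((8, 6, 200))
W = csp_filters(class_a, class_b)
features = csp_features(class_a, W)
```

In the pipeline of this paper, the inputs to CSP would be the EA-aligned trials rather than raw EEG.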
Table 1. Details of the network structure.
| No | Layer | Options |
|---|---|---|
| 0 | Input EEG | size = (250,250,1) |
| 1 | Convolutional layer | size = (250,250,1), kernel size = (11,11,32), padding = (1,1) |
| 2 | Maxpooling layer | size = (120,120,32), kernel size = (2,2,32), padding = (2,2) |
| 3 | Convolutional layer | size = (110,100,32), kernel size = (11,11,32), padding = (1,1) |
| 4 | Convolutional layer | size = (100,100,32), kernel size = (11,11,32), padding = (1,1) |
| 5 | Maxpooling layer | size = (50,50,32), kernel size = (2,2,32), padding = (2,2) |
| 6 | Convolutional layer | size = (44,44,64), kernel size = (7,7,64), padding = (1,1) |
| 7 | Maxpooling layer | size = (22,22,64), kernel size = (2,2,64), padding = (2,2) |
| 8 | Convolutional layer | size = (20,20,128), kernel size = (3,3,128), padding = (1,1) |
| 9 | Maxpooling layer | size = (10,10,128), kernel size = (2,2,128), padding = (2,2) |
| 10 | Convolutional layer | size = (8,8,128), kernel size = (3,3,128), padding = (1,1) |
| 11 | Maxpooling layer | size = (4,4,128), kernel size = (2,2,128), padding = (2,2) |
| 12 | Fully-Connected layer | size = (2048,1) |
| 13 | Softmax layer | size = 2 |
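The network in Table 1 can be sketched in PyTorch. The table's layer sizes are not fully self-consistent, so this sketch takes only the kernel sizes and channel counts literally and assumes padding = 1, stride-2 max pooling, and ReLU activations (the activation function is not given in the table); it is an approximation, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Sketch of the Table 1 architecture: 250x250x1 input, six
# convolutions, five max-pooling stages, a 2048-unit dense
# layer, and a 2-way output (softmax is folded into the loss).
model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=11, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 32, kernel_size=11, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=11, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=7, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(128, 128, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(128 * 6 * 6, 2048), nn.ReLU(),  # 6x6x128 after the pools
    nn.Linear(2048, 2),  # softmax applied via cross-entropy loss
)

# forward pass on a dummy single-channel 250x250 input
logits = model(torch.zeros(1, 1, 250, 250))
```

Training with `nn.CrossEntropyLoss` on the logits is equivalent to the softmax layer listed as layer 13.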
Figure 9. The flowchart of fine-tuning the CNN.
Table 2. Overall classification accuracy (%).
| Target Subject | EA-CSP-SVM | EA-ftCNN | EA-CSP-CNN |
|---|---|---|---|
| S11 | 69 | 73 | 79 |
| S12 | 72 | 64 | 87 |
| S13 | 74 | 70 | 84 |
| S14 | 60 | 63 | 67 |