Jing Jiang, Chunhui Wang, Jinghan Wu, Wei Qin, Minpeng Xu, Erwei Yin.
Abstract
The common spatial pattern (CSP) method is widely used for spatial filtering and brain-pattern extraction from electroencephalogram (EEG) signals in motor imagery (MI)-based brain-computer interfaces (BCIs). The participant-specific time window relative to the visual cue has a significant impact on the effectiveness of CSP, yet this window is usually selected by experience or by hand. To address this problem, we propose a novel feature selection approach for MI-based BCIs. Specifically, multiple time segments were obtained by decomposing each EEG sample of the MI task. Features were then extracted by CSP from each time segment and combined into a new feature vector. Finally, the optimal temporal combination patterns for the new feature vector were selected using four feature selection algorithms, i.e., mutual information, least absolute shrinkage and selection operator, principal component analysis, and stepwise linear discriminant analysis (denoted as MUIN, LASSO, PCA, and SWLDA, respectively), and a classification algorithm was employed to evaluate the average classification accuracy. On three BCI competition datasets, the four proposed algorithms were compared with the traditional CSP algorithm in terms of classification accuracy. Experimental results show that the proposed methods significantly outperform the traditional algorithm. Among the proposed methods, LASSO achieved the highest accuracy (88.58%). Importantly, the average classification accuracies using the proposed approaches improved by 10.14% (MUIN), 11.40% (LASSO), 6.08% (PCA), and 10.25% (SWLDA) relative to CSP. These results indicate that the proposed approach is expected to be practical in MI-based BCIs.
Keywords: brain–computer interface (BCI); common spatial pattern (CSP); electroencephalogram (EEG); feature selection; motor imagery (MI); support vector machine (SVM)
Year: 2020 PMID: 32714167 PMCID: PMC7344307 DOI: 10.3389/fnhum.2020.00231
Source DB: PubMed Journal: Front Hum Neurosci ISSN: 1662-5161 Impact factor: 3.169
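The pipeline described in the abstract (per-window CSP feature extraction, then feature selection over the concatenated features) can be sketched in Python. This is a minimal illustration, not the authors' implementation: the function names, the synthetic data, and the two-window split are assumptions for demonstration only.

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Compute CSP spatial filters from two classes of EEG trials.

    trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns the 2*n_pairs most discriminative filters as rows.
    """
    def mean_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Whitening transform of the composite covariance.
    d, V = np.linalg.eigh(Ca + Cb)
    P = V @ np.diag(d ** -0.5) @ V.T
    # Eigendecompose the whitened class-a covariance; eigenvalues near 1
    # (resp. 0) mark directions dominated by class a (resp. class b).
    _, U = np.linalg.eigh(P @ Ca @ P.T)   # columns in ascending eigenvalue order
    W = U.T @ P
    idx = np.r_[0:n_pairs, -n_pairs:0]    # keep the extreme, most discriminative filters
    return W[idx]

def log_var_features(segment, W):
    """Normalized log-variance features of one spatially filtered EEG segment."""
    v = np.var(W @ segment, axis=1)
    return np.log(v / v.sum())

def multiwindow_features(trial, windows, filters):
    """Concatenate CSP features extracted from each time window of one trial."""
    return np.concatenate([log_var_features(trial[:, s:e], W)
                           for (s, e), W in zip(windows, filters)])

# Toy demonstration on synthetic 4-channel trials (400 samples each),
# where each class boosts the variance of a different channel.
rng = np.random.default_rng(0)
def make_trials(strong_ch, n=30):
    x = rng.standard_normal((n, 4, 400))
    x[:, strong_ch, :] *= 3.0
    return x

A, B = make_trials(0), make_trials(1)
windows = [(0, 200), (200, 400)]          # two equal time segments per trial
filters = [csp_filters(A[:, :, s:e], B[:, :, s:e], n_pairs=1) for s, e in windows]

fa = np.array([multiwindow_features(t, windows, filters) for t in A])
fb = np.array([multiwindow_features(t, windows, filters) for t in B])
# Each trial yields len(windows) * 2 * n_pairs = 4 features.
```

A selector such as LASSO would then be fitted on the concatenated feature vectors to pick the informative time-window components before classification (the keywords indicate an SVM classifier).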
Figure 1. Illustration of the temporal combination pattern optimization method.
Figure 2. Timeline of one trial in dataset 1 (subgraph A), dataset 2 (subgraph B), and dataset 3 (subgraph C).
Accuracy (%) and significance comparisons of the different methods applied on datasets 1, 2, and 3.

| Subject | CSP | MUIN | LASSO | PCA | SWLDA |
| --- | --- | --- | --- | --- | --- |
| *Dataset 1* | | | | | |
| a | 55.5 | 87.5 | 86.5 | 78.0 | 84.5 |
| b | 66.0 | 82.5 | 83.0 | 78.0 | 82.0 |
| c | 77.5 | 92.0 | 92.0 | 66.0 | 87.0 |
| d | 90.5 | 96.5 | 98.0 | 93.5 | 97.5 |
| e | 92.5 | 100.0 | 100.0 | 98.0 | 98.5 |
| f | 85.5 | 91.5 | 91.0 | 90.5 | 91.5 |
| g | 54.5 | 82.0 | 79.5 | 76.0 | 79.0 |
| Mean ± std | 74.6 ± 14.8 | 90.3 ± 6.3 | 90.0 ± 7.0 | 82.9 ± 10.6 | 88.6 ± 7.0 |
| *Dataset 2* | | | | | |
| aa | 80.7 | 81.8 | 83.6 | 80.0 | 83.2 |
| al | 97.5 | 95.7 | 98.9 | 98.9 | 98.9 |
| av | 68.2 | 68.6 | 70.4 | 66.1 | 72.5 |
| aw | 95.7 | 96.8 | 97.1 | 96.4 | 96.8 |
| ay | 92.1 | 92.1 | 96.4 | 93.2 | 96.8 |
| Mean ± std | 86.9 ± 11.0 | 87.0 ± 10.6 | 89.3 ± 10.9 | 86.9 ± 12.3 | 89.6 ± 10.2 |
| *Dataset 3* | | | | | |
| k3 | 85.6 | 93.3 | 93.9 | 92.2 | 91.7 |
| k6 | 60.8 | 57.5 | 61.7 | 62.5 | 60.8 |
| l1 | 90.0 | 95.8 | 96.7 | 95.8 | 94.2 |
| Mean ± std | 78.8 ± 12.8 | 82.2 ± 17.5 | 84.1 ± 15.9 | 83.5 ± 14.9 | 82.2 ± 15.2 |
| p (vs. CSP) | – | 0.0097 | 0.0016 | 0.048 | 0.002 |
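The per-subject accuracies also let one check the reported significance pattern. The record does not state which paired test produced the p-values; the sketch below uses an exact sign-flip permutation test on the per-subject CSP-vs-LASSO differences, which is one standard choice and is illustrative only.

```python
from itertools import product

def sign_flip_pvalue(diffs):
    """Exact two-sided sign-flip permutation test on paired differences."""
    obs = abs(sum(diffs))
    hits = 0
    for signs in product((1.0, -1.0), repeat=len(diffs)):
        if abs(sum(s * d for s, d in zip(signs, diffs))) >= obs - 1e-9:
            hits += 1
    return hits / 2 ** len(diffs)

# Per-subject accuracies (%) read from the table: CSP vs. LASSO,
# subjects a-g, aa-ay, k3/k6/l1 in order.
csp   = [55.5, 66.0, 77.5, 90.5, 92.5, 85.5, 54.5,
         80.7, 97.5, 68.2, 95.7, 92.1, 85.6, 60.8, 90.0]
lasso = [86.5, 83.0, 92.0, 98.0, 100.0, 91.0, 79.5,
         83.6, 98.9, 70.4, 97.1, 96.4, 93.9, 61.7, 96.7]
p = sign_flip_pvalue([l - c for l, c in zip(lasso, csp)])
# All 15 differences favor LASSO, so this exact test yields p = 2 / 2**15.
```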
Figure 3. Two-dimensional feature distribution maps for each class obtained using the traditional method and the proposed feature selection-based algorithms (i.e., MUIN, LASSO, PCA, SWLDA) in dataset 1 [subjects (A–G)].
Figure 4. Bar chart of the total number of selected time windows for the proposed algorithms in datasets 1–3. (The meaning of the time-window indexes is given in Section Multi-time segmenting and temporal band-pass filtering.)
Ratio comparison of samples with selected features from different time windows to the total samples.

| Subject | MUIN | LASSO | PCA | SWLDA |
| --- | --- | --- | --- | --- |
| a | 0.7 | 0.6 | 0.6 | 0.8 |
| b | 0.2 | 0.5 | 0.9 | 0.3 |
| c | 0.9 | 1 | 0.5 | 1 |
| d | 0 | 0.7 | 0.5 | 0.9 |
| e | 0.3 | 1 | 0.8 | 0.8 |
| f | 1 | 0 | 0.8 | 1 |
| g | 0.7 | 0.7 | 0.9 | 0.8 |
| aa | 0.7 | 1 | 0.9 | 1 |
| al | 1 | 0.8 | 1 | 0.5 |
| av | 0.9 | 1 | 0.7 | 0.5 |
| aw | 0.6 | 0.7 | 0.7 | 0.4 |
| ay | 1 | 1 | 0.7 | 1 |
| k3 | 1 | 1 | 1 | 0.8 |
| k6 | 0.7 | 1 | 0.5 | 0.7 |
| l1 | 0.5 | 1 | 0.9 | 0.7 |
| Mean ± std | 0.68 ± 0.3 | 0.8 ± 0.27 | 0.76 ± 0.17 | 0.75 ± 0.22 |
Figure 5. Time-frequency plots for participant "l1" under the two MI tasks and three channels (C3, CZ, and C4). LH and RH indicate left hand and right hand, respectively. Blue indicates ERD.
Figure 6. Topographic maps of the two MI tasks from participant "l1". These maps are obtained from the ERSP value of every channel, with interpolation between channels. Blue areas indicate that an ERD phenomenon occurs in the corresponding brain area while the subject performs the motor imagery task.
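ERSP values of the kind shown in these maps are commonly computed as the dB ratio of spectral power to a pre-cue baseline, with negative values marking ERD (a power decrease) and positive values ERS. A minimal sketch, where the function name and the baseline convention are assumptions rather than details from the paper:

```python
import numpy as np

def ersp_db(power, baseline_slice):
    """ERSP in dB: spectral power relative to the mean pre-cue baseline power.

    power: array of shape (..., n_times) of spectral power estimates.
    Negative output indicates ERD; positive output indicates ERS.
    """
    base = power[..., baseline_slice].mean(axis=-1, keepdims=True)
    return 10.0 * np.log10(power / base)

# Toy example: baseline power 2.0, post-cue power halved to 1.0.
power = np.concatenate([np.full(50, 2.0), np.full(50, 1.0)])
ersp = ersp_db(power, slice(0, 50))
# Post-cue ERSP is 10*log10(0.5), about -3.01 dB, i.e., an ERD.
```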