| Literature DB >> 35530739 |
Lei Zhu, Qifeng Hu, Junting Yang, Jianhai Zhang, Ping Xu, Nanjiao Ying.
Abstract
In brain-computer interfaces (BCIs), feature extraction is key to recognition accuracy. EEG signals carry important local structural information that is effective for classification, and this locality exists not only in the spatial channel positions but also in the frequency domain. In order to retain sufficient spatial structure and frequency information, we use one-versus-rest filter bank common spatial patterns (OVR-FBCSP) to preprocess the data and extract preliminary features. On this basis, we study feature extraction methods. One-dimensional methods such as linear discriminant analysis (LDA) may destroy this structural information, and traditional manifold learning or two-dimensional methods cannot extract both types of information at the same time. We therefore introduce a bilinear structure and a matrix-variate Gaussian model into the two-dimensional discriminant locality preserving projection (2DDLPP) algorithm and decompose EEG signals into spatial and spectral parts; the most discriminative features are then selected through a weight calculation method. We tested the method on BCI competition data set 2a, data set IIIa, and a data set collected by our laboratory, reporting results as recognition accuracy. The cross-validation results were 75.69%, 70.46%, and 54.49%, respectively. The average recognition accuracy of the new method improves on LDA, two-dimensional linear discriminant analysis (2DLDA), discriminant locality preserving projections (DLPP), and 2DDLPP by 7.14%, 7.38%, 4.86%, and 3.8%, respectively. We therefore consider the proposed method effective for EEG classification.
Year: 2021 PMID: 35530739 PMCID: PMC9071993 DOI: 10.1155/2021/6668859
Source DB: PubMed Journal: Comput Intell Neurosci
Figure 1. Operation steps of FBCSP. Each trial in the feature matrix contains spatial-spectral information.
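The pipeline in Figure 1 (a band-pass filter bank, per-band CSP spatial filters, stacked log-variance features) can be sketched in a few functions. Below is a minimal two-class sketch, assuming trials shaped (n_trials, n_channels, n_samples); the paper's OVR-FBCSP applies the same idea one-versus-rest per class, and the sampling rate, band edges, and m here are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.linalg import eigh

def bandpass(trials, lo, hi, fs):
    """Zero-phase Butterworth band-pass along the sample axis."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, trials, axis=-1)

def mean_cov(trials):
    """Average trace-normalised spatial covariance over trials."""
    covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
    return np.mean(covs, axis=0)

def csp_filters(x1, x2, m):
    """The 2m most discriminative CSP spatial filters (m per class)."""
    c1, c2 = mean_cov(x1), mean_cov(x2)
    vals, vecs = eigh(c1, c1 + c2)        # generalised eigenproblem
    order = np.argsort(vals)
    return vecs[:, np.r_[order[:m], order[-m:]]]   # (n_channels, 2m)

def log_var_features(trials, w):
    """Normalised log-variance of the spatially filtered trials."""
    proj = np.einsum("cf,ncs->nfs", w, trials)     # w.T @ trial, per trial
    var = proj.var(axis=-1)
    return np.log(var / var.sum(axis=1, keepdims=True))

def fbcsp(x1, x2, fs=250, m=2,
          bands=((4, 8), (8, 12), (12, 16), (16, 20), (20, 24),
                 (24, 28), (28, 32), (32, 36), (36, 40))):
    """One feature row per trial, with 2m columns per sub-band."""
    f1, f2 = [], []
    for lo, hi in bands:
        b1, b2 = bandpass(x1, lo, hi, fs), bandpass(x2, lo, hi, fs)
        w = csp_filters(b1, b2, m)
        f1.append(log_var_features(b1, w))
        f2.append(log_var_features(b2, w))
    return np.hstack(f1), np.hstack(f2)
```

Stacking the per-band CSP features side by side is what gives each trial the spatial-spectral matrix structure that the later bilinear projection exploits.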
Figure 2The flow chart of the experiment.
The pseudocode for training the B2DDLPP feature extractor.
| Algorithm: B2DDLPP |
|---|
| Inputs: |
| - Training samples |
| Outputs: |
| - The feature extraction operators |
| Procedure: |
| 1. Calculate the spatial covariance matrix … |
| 2. Calculate … |
| 3. Calculate the eigenvalues … |
| 5. Calculate the feature matrix Y according to (…) |
| 6. Choose the … |
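Several steps of the pseudocode above are truncated in this record (the equation references are lost), so the following is only a hedged reconstruction: a bilinear DLPP-style trainer that alternately solves generalized eigenproblems for a left (spatial) and a right (spectral) projection. The heat-kernel affinities, the regularization, and the alternating scheme are illustrative assumptions, not the paper's exact update equations.

```python
import numpy as np
from scipy.linalg import eigh

def heat_affinity(xs, sigma=1.0):
    """Pairwise heat-kernel affinities between matrix-shaped samples."""
    n = len(xs)
    w = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            w[i, j] = np.exp(-np.linalg.norm(xs[i] - xs[j]) ** 2 / sigma)
    return w

def pair_scatter(items, weights, proj, left_fixed):
    """Weighted locality scatter of pairwise differences, projected
    through the currently fixed side of the bilinear map."""
    dim = items[0].shape[1] if left_fixed else items[0].shape[0]
    s = np.zeros((dim, dim))
    for i in range(len(items)):
        for j in range(len(items)):
            if weights[i, j] == 0.0:
                continue
            d = items[i] - items[j]
            if left_fixed:              # update right side: d^T L L^T d
                s += weights[i, j] * (d.T @ proj @ proj.T @ d)
            else:                       # update left side: d R R^T d^T
                s += weights[i, j] * (d @ proj @ proj.T @ d.T)
    return s

def train_b2ddlpp(xs, ys, d_left, d_right, iters=5, sigma=1.0):
    """xs: list of (channels x bands) trials, ys: integer labels.
    Returns left/right operators; features are Y_i = L.T @ X_i @ R."""
    ys = np.asarray(ys)
    classes = np.unique(ys)
    means = [np.mean([x for x, y in zip(xs, ys) if y == c], axis=0)
             for c in classes]
    # Within-class affinities on samples, between-class on class means.
    w = heat_affinity(xs, sigma) * (ys[:, None] == ys[None, :])
    b = heat_affinity(means, sigma)
    L = np.eye(xs[0].shape[0])
    R = np.eye(xs[0].shape[1])
    for _ in range(iters):
        for left_fixed in (True, False):
            proj = L if left_fixed else R
            sw = pair_scatter(xs, w, proj, left_fixed)
            sb = pair_scatter(means, b, proj, left_fixed)
            reg = 1e-6 * np.eye(sw.shape[0])
            # Maximise between-class over within-class locality scatter.
            vals, vecs = eigh(sb, sw + reg)
            top = np.argsort(vals)[::-1]
            if left_fixed:
                R = vecs[:, top[:d_right]]
            else:
                L = vecs[:, top[:d_left]]
    return L, R
```

Step 6 of the pseudocode (selecting the most discriminative features via weights) would then rank the rows or columns of Y_i before classification; its exact weight formula is not recoverable from this record.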
Cross-validation performance results for different algorithms in Exp.1.
| Feature extraction | Subj.1 (%) | Subj.2 (%) | Subj.3 (%) | Subj.4 (%) | Subj.5 (%) | Subj.6 (%) | Subj.7 (%) | Subj.8 (%) | Subj.9 (%) | Average (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| None | 84.72 (m = 3) | 57.70 (m = 1) | 87.23 (m = 2) | 54.93 (m = 4) | 63.81 (m = 2) | 50.71 (m = 4) | 88.61 (m = 1) | 81.05 (m = 1) | — | 72.88 |
| LDA | 79.22 (m = 1) | 51.78 (m = 1) | 79.57 (m = 1) | 50.38 (m = 1) | 63.94 (m = 1) | 44.81 (m = 1) | 85.91 (m = 1) | 75.04 (m = 1) | 74.34 (m = 1) | 67.22 |
| 2DLDA | 83.27 (m = 1) | 55.19 (m = 2) | 78.44 (m = 2) | 50.33 (m = 1) | 67.32 (m = 2) | 44.56 (m = 1) | 86.10 (m = 1) | 77.43 (m = 2) | 74.97 (m = 1) | 68.62 |
| DLPP | 80.55 (m = 2) | 55.56 (m = 1) | 84.38 (m = 1) | 54.17 (m = 2) | 63.20 (m = 1) | 50.69 (m = 2) | 88.19 (m = 1) | 80.55 (m = 2) | 80.21 (m = 2) | 70.83 |
| 2DDLPP | 84.03 (m = 1) | 56.40 (m = 1) | 86.60 (m = 1) | 52.71 (m = 3) | 65.89 (m = 2) | 49.66 (m = 2) | 88.97 (m = 1) | 81.79 (m = 2) | 80.98 (m = 1) | 71.89 |
| B2DDLPP | — | — | — | — | — | — | 85.38 (m = 4) | — | — | 75.69 |
For each method and each subject, the optimal m (related to FBCSP's output) and the optimal dimension (dop) are presented.
Cross-validation performance results for different algorithms in Exp.2.
| Feature extraction | Subj.1 (%) | Subj.2 (%) | Subj.3 (%) | Average (%) |
|---|---|---|---|---|
| None | 81.67 (m = 1) | 56.67 (m = 1) | 58.33 (m = 2) | 65.56 |
| LDA | 78.89 (m = 1) | 58.33 (m = 2) | 53.33 (m = 2) | 63.52 |
| 2DLDA | 79.89 (m = 2) | 58.50 (m = 2) | 55.17 (m = 2) | 64.52 |
| DLPP | 85.00 (m = 1) | 57.50 (m = 1) | 52.50 (m = 1) | 65.00 |
| 2DDLPP | 85.44 (m = 3) | 54.33 (m = 1) | 57.67 (m = 1) | 65.81 |
| B2DDLPP | — | — | — | 70.46 |
For each method and each subject, the optimal m (related to FBCSP's output) and the optimal dimension (dop) are presented.
Cross-validation performance results for different algorithms in Exp.3.
| Feature extraction | Subj.1 (%) | Subj.2 (%) | Subj.3 (%) | Subj.4 (%) | Subj.5 (%) | Subj.6 (%) | Subj.7 (%) | Subj.8 (%) | Subj.9 (%) | Subj.10 (%) | Average (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| None | 43.33 (m = 1) | 79.67 (m = 1) | 58.31 (m = 2) | 50.66 (m = 2) | 48.33 (m = 2) | 41.00 (m = 2) | 57.60 (m = 2) | 42.36 (m = 1) | 39.33 (m = 1) | 84.00 (m = 1) | 54.45 |
| LDA | 39.33 (m = 1) | 73.67 (m = 1) | 50.00 (m = 1) | 45.00 (m = 1) | 43.34 (m = 1) | 40.33 (m = 1) | 54.77 (m = 1) | 39.67 (m = 1) | 39.00 (m = 1) | 75.33 (m = 1) | 50.04 |
| 2DLDA | 41.13 (m = 1) | 72.33 (m = 1) | 52.67 (m = 1) | 47.52 (m = 1) | 45.67 (m = 1) | 39.83 (m = 1) | 54.93 (m = 1) | 40.33 (m = 1) | 39.13 (m = 1) | 77.33 (m = 1) | 51.09 |
| DLPP | 41.67 (m = 2) | 75.33 (m = 1) | 52.67 (m = 2) | 46.00 (m = 2) | 49.33 (m = 2) | 39.33 (m = 1) | 56.00 (m = 2) | 39.67 (m = 2) | 39.67 (m = 1) | 81.67 (m = 1) | 52.13 |
| 2DDLPP | 41.67 (m = 1) | 78.00 (m = 1) | 55.33 (m = 2) | 47.33 (m = 2) | 45.33 (m = 1) | 40.67 (m = 1) | 55.33 (m = 2) | 41.67 (m = 2) | 40.33 (m = 2) | 79.67 (m = 1) | 53.13 |
| B2DDLPP | — | — | — | — | — | — | — | — | — | — | 54.49 |
For each method and each subject, the optimal m (related to FBCSP's output) and the optimal dimension (dop) are presented.
Figure 3. The results of the test set for Exp.1. The figure shows the accuracy of the 9 subjects and their average accuracy.
Figure 4. The results of the test set for Exp.2.
Figure 5. Five-fold cross-validation performance for different numbers of dimensions in Exp.2: (a) the accuracy across dimensions for subj.1; (b) the accuracy across dimensions for subj.2; (c) the accuracy across dimensions for subj.3.
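The sweep behind Figure 5 amounts to evaluating five-fold cross-validated accuracy while varying how many feature dimensions are retained. A minimal sketch, assuming a feature matrix `feats` (n_trials x n_features, e.g. flattened B2DDLPP outputs) and a k-NN classifier; the classifier used by the authors is not specified in this record.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def accuracy_per_dimension(feats, labels, max_dim):
    """Mean five-fold CV accuracy when keeping the first d feature columns."""
    accs = []
    for d in range(1, max_dim + 1):
        clf = KNeighborsClassifier(n_neighbors=5)
        accs.append(cross_val_score(clf, feats[:, :d], labels, cv=5).mean())
    return np.array(accs)

# Picking the optimal dimension (dop) reported in the tables:
# accs = accuracy_per_dimension(feats, labels, max_dim=30)
# dop = int(np.argmax(accs)) + 1
```

Curves like those in Figure 5 are obtained by plotting the returned accuracies against the dimension index; dop is simply the argmax of that curve for each subject.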