Dan Zhang, Yijun Wang, Xiaorong Gao, Bo Hong, Shangkai Gao.
Abstract
A robust brain-computer interface (BCI) system based on motor imagery (MI) should be able to tell when the subject is not concentrating on the MI tasks (the "idle state"), so that real MI periods can be extracted accurately. Moreover, because of the diversity of the idle state, detecting it without training samples is as important as classifying the MI tasks. In this paper, we propose an algorithm to solve this problem. A three-class classifier was constructed by combining two two-class classifiers: one specialized for idle-state detection and the other for the two MI tasks. Common spatial subspace decomposition (CSSD) was used to extract features of event-related desynchronization (ERD) in the two motor imagery tasks. Fisher discriminant analysis (FDA) was then employed to design the two two-class classifiers, each detecting its respective task. The algorithm thereby provides a way to solve the problem of "idle-state detection without training samples." Applied to dataset IVc from BCI competition III, it achieved a mean square error of 0.30 on the testing set; this is the winning algorithm for that competition dataset. In addition, the algorithm was validated on EEG data from an MI experiment that included an "idle" task.
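The feature-extraction and classification pipeline named in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: CSSD is stood in for by a CSP-style generalized eigendecomposition of the two classes' covariance matrices, FDA uses its standard two-class closed form, and all function and variable names are hypothetical.

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=1):
    """CSP-style spatial filters (a stand-in for CSSD).
    trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in trials], axis=0)
    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Whiten the composite covariance, then diagonalise class A in that space.
    d, u = np.linalg.eigh(ca + cb)
    whiten = u @ np.diag(d ** -0.5) @ u.T
    lam, v = np.linalg.eigh(whiten @ ca @ whiten)
    w = whiten @ v                 # columns sorted by ascending eigenvalue
    # Keep the filters at both extremes: most discriminative for each class.
    return np.hstack([w[:, :n_pairs], w[:, -n_pairs:]])

def log_var_features(trials, filters):
    """Log-variance of the spatially filtered signals (classic ERD feature)."""
    return np.array([np.log(np.var(filters.T @ x, axis=1)) for x in trials])

def fda_train(feats_a, feats_b, reg=1e-6):
    """Two-class Fisher discriminant analysis; returns (weights, bias)."""
    ma, mb = feats_a.mean(axis=0), feats_b.mean(axis=0)
    sw = np.cov(feats_a, rowvar=False) + np.cov(feats_b, rowvar=False)
    w = np.linalg.solve(sw + reg * np.eye(len(ma)), ma - mb)
    b = -w @ (ma + mb) / 2.0
    return w, b   # sign(w @ x + b): +1 -> class A, -1 -> class B
```

On synthetic band-power differences this recovers the discriminative channels; in the paper, two such binary FDA classifiers are built, one per detection task.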
Year: 2007 PMID: 18274604 PMCID: PMC1994518 DOI: 10.1155/2007/39714
Source DB: PubMed Journal: Comput Intell Neurosci
Figure 1. Averaged ERD spatial mappings of (a) left hand and (b) right foot in the training set.
Figure 2. Most important spatial pattern of (a) left hand and (b) right foot.
Ideal classification results of the three tasks.

| Task | Output of f1 | Output of f2 | (f1 + f2)/2 |
|---|---|---|---|
| Left hand | −1 (ERD in A1) | −1 (no ERD in A2) | −1 (−1/−1) |
| Right foot | +1 (no ERD in A1) | +1 (ERD in A2) | +1 (+1/+1) |
| Idle | +1 (no ERD in A1) | −1 (no ERD in A2) | 0 (+1/−1) |

"(f1 + f2)/2" represents the mean value of the two outputs corresponding to f1 and f2 in the same row.
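The table's combination rule is simply the arithmetic mean of the two binary outputs: the two consistent patterns map to the MI labels and the idle pattern to 0 (note that the mean also sends the contradictory pattern −1/+1 to 0). A minimal sketch, assuming hard decisions in {−1, +1}:

```python
def combine(f1, f2):
    """Merge the two binary FDA decisions into the three-class label:
    -1 (left hand), +1 (right foot), 0 (idle)."""
    return (f1 + f2) / 2.0
```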
Figure 3. Flow chart of our algorithm.
Figure 4. Classification process of Step 1.
Figure 5. Classification process of Step 2.
Figure 6. Distribution of classification results with respect to the three true labels.
Performance measures of subject FL corresponding to different P values.

| P | MI trials retained | Idle trials detected | MI classification accuracy |
|---|---|---|---|
| 100% | 100.0 ± 0.0% | 0.0 ± 0.0% | 89.0 ± 2.3% |
| 95% | 96.1 ± 1.8% | 4.2 ± 1.2% | 94.9 ± 1.8% |
| 90% | 90.0 ± 1.6% | 61.2 ± 2.1% | 96.8 ± 1.1% |
| 85% | 84.2 ± 2.3% | 71.0 ± 3.2% | 97.2 ± 2.5% |
| 80% | 74.1 ± 1.9% | 81.4 ± 1.8% | 96.6 ± 2.1% |
| 75% | 65.3 ± 2.2% | 91.0 ± 1.6% | 97.6 ± 1.4% |
| 70% | 62.7 ± 3.2% | 95.5 ± 0.9% | 98.9 ± 0.8% |
| 65% | 51.8 ± 2.0% | 98.1 ± 2.2% | 98.7 ± 1.0% |
| 60% | 45.1 ± 1.6% | 99.6 ± 0.9% | 99.3 ± 1.2% |
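The P parameter in the table acts as a sensitivity knob: lowering P rejects more low-confidence trials as idle, trading MI retention (second column) against idle detection (third column). The authors' exact rule is not reproduced here; the following is a hypothetical percentile-based sketch, where `train_scores` stands for the classifier outputs on the training MI trials.

```python
import numpy as np

def percentile_threshold(train_scores, p):
    """Magnitude threshold that retains roughly a fraction p of the
    training MI trials (hypothetical reconstruction of the P parameter)."""
    return np.percentile(np.abs(train_scores), 100.0 * (1.0 - p))

def classify(score, thr):
    """Label a trial idle (0) when its score magnitude falls below thr,
    otherwise keep the sign of the score (-1 or +1)."""
    return 0.0 if abs(score) < thr else float(np.sign(score))
```

With p = 100% the threshold sits at the minimum training magnitude, so every MI trial is retained and no idle trial is detected, matching the first table row.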
Performance measures of three subjects with the optimal P values.

| Subject | Optimal P | MI trials retained | Idle trials detected | MI classification accuracy |
|---|---|---|---|---|
| ZYJ | 90% | 78.2 ± 1.7% | 90.2 ± 1.3% | 98.3 ± 1.2% |
| FL | 70% | 62.7 ± 3.2% | 95.5 ± 0.9% | 98.9 ± 0.8% |
| ZD | 80% | 61.2 ± 2.2% | 96.1 ± 1.1% | 99.4 ± 0.4% |
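The competition score quoted in the abstract is the mean square error between the predicted outputs and the target values −1 (left hand), 0 (relax), +1 (right foot). Under this metric, outputting 0 for an uncertain trial costs less than a full sign error, which makes a graded idle output attractive. A minimal sketch of the metric:

```python
import numpy as np

def competition_mse(pred, target):
    """Mean square error between predicted and true labels in {-1, 0, +1}."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    return float(np.mean((pred - target) ** 2))
```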
Figure 7. Averaged spatial mapping of relax (calculated in the same way as in Figure 1) in the testing set.
Figure 8. Distribution of classification results in Step 1.