Penghai Li, Jianxian Su, Abdelkader Nasreddine Belkacem, Longlong Cheng, Chao Chen.
Abstract
Objective: Conventional single-person brain-computer interface (BCI) systems suffer from intrinsic deficiencies such as a low signal-to-noise ratio, marked individual differences, and unstable experimental performance. To address these problems, this paper constructs a centralized steady-state visually evoked potential collaborative BCI system (SSVEP-cBCI) characterized by multi-person electroencephalography (EEG) feature fusion. Three feature fusion methods compatible with the new system were developed and applied to EEG classification, and their classification accuracy was compared using a transfer learning-based convolutional neural network (TL-CNN) approach.
Approach: An EEG-based SSVEP-cBCI system was set up to merge the EEG features of different individuals elicited by the same task instructions, using three fusion methods: parallel connection, serial connection, and multi-person averaging. The fused features were then fed into a CNN for classification. Transfer learning (TL) was applied first to the Tsinghua University (THU) benchmark dataset and then to the collected dataset, so that the CNN training requirement could be met with a much smaller collected dataset while increasing classification accuracy. Ten subjects were recruited for data collection, and both datasets were used to gauge the performance of the three fusion algorithms.
Main results: The predictions of the TL-CNN approach in single-person mode were compared with those in multi-person mode under the three feature fusion methods. The experimental results show that every multi-person mode is superior to the single-person mode. Within the 3 s time window, the classification accuracy of the single-person CNN is only 90.6%, whereas the two-person parallel connection fusion method reaches 96.6%, a clear improvement.
Significance: The results show that the three multi-person feature fusion methods and the TL-CNN-based deep learning classification algorithm can effectively improve SSVEP-cBCI classification performance. The multi-person parallel feature connection method achieves the best classification results. Different feature fusion methods can be selected in different application scenarios to further optimize the cBCI.
Keywords: collaborative BCI; convolutional neural network; feature fusion; steady-state visually evoked potential; transfer learning
Year: 2022 PMID: 35958998 PMCID: PMC9360603 DOI: 10.3389/fnins.2022.971039
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 5.152
FIGURE 1 Centralized cBCI structure designed in this study.
FIGURE 2 SSVEP stimulation interface and label reminder method. (A) Stimulus interface. (B) Random tag prompt.
FIGURE 3 Three fusion methods: (1) feature parallel connection, (2) feature serial connection, and (3) feature averaging. Single-1 and Single-2 represent two single-person features used for feature fusion.
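The three fusion operations in Figure 3 amount to simple array operations on the single-person feature maps. A minimal NumPy sketch, using hypothetical (3, 78) feature blocks that match the single-person CNN input size tabulated below (the serial table's (3, 234) width is not reconstructed here):

```python
import numpy as np

# Hypothetical single-person SSVEP feature blocks of shape (3, 78),
# matching the single-person CNN input size reported in the tables.
rng = np.random.default_rng(0)
single_1 = rng.standard_normal((3, 78))
single_2 = rng.standard_normal((3, 78))

# (1) Parallel connection: stack along the channel axis -> (6, 78),
# the input size of the two-person parallel CNN.
parallel = np.concatenate([single_1, single_2], axis=0)

# (2) Serial connection: join blocks end to end along the feature axis.
# Two blocks give (3, 156); how the paper assembles its (3, 234) serial
# input is not specified in this excerpt.
serial = np.concatenate([single_1, single_2], axis=1)

# (3) Averaging: element-wise mean preserves the single-person shape.
averaged = (single_1 + single_2) / 2

print(parallel.shape, serial.shape, averaged.shape)
```

Note that only averaging keeps the single-person input shape, which is why the parallel and serial variants need their own CNN architectures below.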
Single-person convolutional neural network structure.
| Layer number | Layer | Filter | Kernel size | Feature size | Activation |
| 1 | Input data | – | – | (3, 78) | – |
| 2 | Conv2D | 16 | (3, 1) | (3, 78) | ReLU |
| 3 | Conv2D | 32 | (3, 1) | (1, 78) | ReLU |
| 4 | Conv2D | 32 | (1, 3) | (1, 78) | ReLU |
| 5 | Conv2D | 64 | (1, 3) | (1, 76) | ReLU |
| 6 | Flatten | – | – | – | – |
| 7 | Dense | 8 | – | – | – |
| 8 | Dropout | – | Rate = 0.5 | – | – |
| 9 | Dense | 4 | – | – | Softmax |
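The feature sizes in this table are reproducible with stride-1 convolutions if the padding alternates between "same" and "valid"; the padding assignment in the sketch below is our inference from the reported sizes, not something stated in the excerpt:

```python
def conv_out(size, kernel, padding):
    """Output (H, W) of a stride-1 2-D convolution."""
    if padding == "same":
        return size
    # 'valid': no zero-padding, each dimension shrinks by kernel - 1
    return tuple(s - k + 1 for s, k in zip(size, kernel))

# Layers 2-5 of the single-person CNN; the same/valid assignment is our
# assumption, chosen so the computed sizes match the tabulated ones.
size = (3, 78)
for kernel, padding in [((3, 1), "same"), ((3, 1), "valid"),
                        ((1, 3), "same"), ((1, 3), "valid")]:
    size = conv_out(size, kernel, padding)

print(size)  # (1, 76), the feature size entering the Flatten layer
```

The same arithmetic reproduces the feature sizes in the two-person parallel and serial tables, e.g. a "valid" (3, 1) kernel maps (6, 78) to (4, 78).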
Two-person parallel feature connection CNN structure.
| Layer number | Layer | Filter | Kernel size | Feature size | Activation |
| 1 | Input data | – | – | (6, 78) | – |
| 2 | Conv2D | 16 | (6, 1) | (6, 78) | ReLU |
| 3 | Conv2D | 32 | (3, 1) | (4, 78) | ReLU |
| 4 | Conv2D | 64 | (3, 1) | (2, 78) | ReLU |
| 5 | Conv2D | 64 | (2, 1) | (1, 78) | ReLU |
| 6 | Conv2D | 128 | (1, 3) | (1, 78) | ReLU |
| 7 | Conv2D | 256 | (1, 3) | (1, 76) | ReLU |
| 8 | Flatten | – | – | – | – |
| 9 | Dense | 8 | – | – | – |
| 10 | Dropout | – | Rate = 0.5 | – | – |
| 11 | Dense | 4 | – | – | Softmax |
Two-person serial feature connection CNN structure.
| Layer number | Layer | Filter | Kernel size | Feature size | Activation |
| 1 | Input data | – | – | (3, 234) | – |
| 2 | Conv2D | 48 | (3, 1) | (3, 234) | ReLU |
| 3 | Conv2D | 96 | (3, 1) | (1, 234) | ReLU |
| 4 | Conv2D | 96 | (1, 3) | (1, 234) | ReLU |
| 5 | Conv2D | 192 | (1, 3) | (1, 232) | ReLU |
| 6 | Flatten | – | – | – | – |
| 7 | Dense | 8 | – | – | – |
| 8 | Dropout | – | Rate = 0.5 | – | – |
| 9 | Dense | 4 | – | – | Softmax |
FIGURE 4 Training strategy.
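The two-stage strategy of Figure 4 (pre-train on the large THU benchmark, then fine-tune on the small collected set) can be caricatured with a toy linear model; the point is the freezing mechanics, not the CNN itself, and every shape, learning rate, and dataset size here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def train(X, Y, W_feat, W_head, freeze_feature, lr=0.05, steps=200):
    """Gradient descent on mean squared error. W_feat stands in for the
    convolutional stack, W_head for the dense classification layers."""
    for _ in range(steps):
        H = X @ W_feat                 # "extracted features"
        G = (H @ W_head - Y) / len(X)  # d(loss)/d(logits)
        if not freeze_feature:
            W_feat = W_feat - lr * X.T @ (G @ W_head.T)
        W_head = W_head - lr * H.T @ G
    return W_feat, W_head

# Stage 1: "pre-train" the whole toy model on a large benchmark-like set
# (78 features -> 8 hidden -> 4 classes, mirroring the tabulated widths).
Xb = rng.standard_normal((256, 78))
Yb = np.eye(4)[rng.integers(0, 4, 256)]
W_feat = 0.05 * rng.standard_normal((78, 8))
W_head = 0.05 * rng.standard_normal((8, 4))
W_feat, W_head = train(Xb, Yb, W_feat, W_head, freeze_feature=False)

# Stage 2: fine-tune only the head on a small "collected" set,
# keeping the pre-trained feature extractor frozen.
Xc = rng.standard_normal((32, 78))
Yc = np.eye(4)[rng.integers(0, 4, 32)]
W_feat2, W_head2 = train(Xc, Yc, W_feat, W_head, freeze_feature=True)

assert np.allclose(W_feat, W_feat2)  # frozen layers unchanged
```

Freezing the early layers is what lets the small collected dataset satisfy the training requirement: only the head's few parameters are re-estimated in stage 2.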
Classification accuracy comparison of single- and two-person models under different time windows.
| Subject | TL-CNN (%) | | | | | | | |
| | 3 s | 2.8 s | 2.6 s | 2.4 s | 2.2 s | 2.0 s | 1.8 s | 1.6 s |
| S1 | 91.6 | 87.5 | 81.3 | 77.1 | 79.1 | 75 | 70.8 | 60.4 |
| S2 | 89.5 | 85.4 | 87.5 | 83.2 | 77.1 | 66.7 | 66.7 | 66.6 |
| S3 | 97.9 | 91.6 | 91.6 | 89.6 | 91.7 | 83.3 | 77.1 | 70.8 |
| S4 | 79.1 | 68.6 | 77 | 70.8 | 66.7 | 68.6 | 70.8 | 64.5 |
| S5 | 100 | 100 | 97.9 | 95.8 | 95.8 | 93.7 | 87.5 | 75 |
| S6 | 93.7 | 87.5 | 83.3 | 83.3 | 83.4 | 79.2 | 77.1 | 72.9 |
| S7 | 79.1 | 77 | 77.1 | 68.8 | 60.4 | 60.4 | 58.3 | 54.2 |
| S8 | 100 | 100 | 100 | 97.9 | 97.9 | 93.4 | 87.5 | 85.4 |
| S9 | 75 | 64.5 | 68 | 70.9 | 60.4 | 60.4 | 60.4 | 50 |
| S10 | 100 | 97.9 | 97.9 | 93.7 | 91.7 | 91.7 | 72.9 | 72.9 |
| S-average | 90.6 | 86.4 | 86.2 | 82.4 | 80.4 | 77.3 | 72.9 | 67.3 |
| C1 | 95.8 | 93.3 | 87.5 | 79.2 | 91.6 | 77 | 75.1 | 75 |
| C2 | 89.5 | 93.3 | 95.5 | 85.4 | 81.2 | 79.2 | 81.3 | 75 |
| C3 | 100 | 95.8 | 97.9 | 95.8 | 91.7 | 85.4 | 91.2 | 91.2 |
| C4 | 100 | 100 | 95.8 | 93.7 | 87.5 | 85.4 | 85.4 | 75 |
| C5 | 97.9 | 97.9 | 93.7 | 93.7 | 83.3 | 87.5 | 81.2 | 70.8 |
| C-average | 96.6 | 96.1 | 94.1 | 89.6 | 87.1 | 83.1 | 82.9 | 77.4 |
FIGURE 5 Accuracy and ITR under different time windows. (A) Classification accuracy for different time windows. (B) ITR for different time windows.
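ITR values like those in Figure 5B are conventionally computed from accuracy and selection time with the Wolpaw formula. A sketch, assuming 4 stimulation targets and no extra gaze-shift time (both are our assumptions, not stated in this excerpt):

```python
import math

def itr_bits_per_min(n_targets, accuracy, selection_time_s):
    """Wolpaw information transfer rate for an n-target BCI, in bits/min."""
    p = accuracy
    if p <= 1.0 / n_targets:       # at or below chance level
        return 0.0
    bits = math.log2(n_targets)
    if p < 1.0:                    # avoid log2(0) at perfect accuracy
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n_targets - 1))
    return bits * 60.0 / selection_time_s

# Example: two-person parallel fusion, 96.6 % accuracy, 3 s window.
print(round(itr_bits_per_min(4, 0.966, 3.0), 2))  # 34.64 bits/min
```

Under these assumptions, the shorter windows trade accuracy for more selections per minute, which is why Figure 5 reports both quantities.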
FIGURE 6 Total classification accuracy of the three fusion methods under different numbers of participants. Line "a" shows the total classification accuracy of the single-person features.