| Literature DB >> 32908576 |
Kai Zhang1,2, Guanghua Xu1,2, Longtin Chen1,2, Peiyuan Tian1,2, ChengCheng Han1,2, Sicong Zhang1, Nan Duan1.
Abstract
In brain-computer interface (BCI) systems, variations across sessions and subjects lead to differences in the properties of brain potentials. This causes the feature distribution of the electroencephalogram (EEG) to vary across subjects, which greatly reduces the generalization ability of a classifier. Although the subject-dependent (SD) strategy offers a promising route to personalized classification, it cannot reach the expected performance because of the limited amount of per-subject data, especially for a deep neural network (DNN) classification model. Herein, we propose an instance transfer subject-dependent (ITSD) framework combined with a convolutional neural network (CNN) to improve classification accuracy on the motor imagery (MI) task. The proposed framework consists of the following steps. First, an instance transfer learning method based on the perceptive hash algorithm is proposed to measure the similarity of spectrogram EEG signals between different subjects. Then, a CNN is developed to decode these signals after instance transfer learning. Next, the classification performance of three training strategies (subject-independent (SI-) CNN, SD-CNN, and ITSD-CNN) is compared. To verify the effectiveness of the algorithm, we evaluate it on the BCI competition IV-2b dataset. Experiments show that instance transfer learning achieves positive transfer with a CNN classification model. Among the three training strategies, ITSD-CNN reaches an average classification accuracy of 94.7 ± 2.6% and shows a clear improvement over the contrast models (p < 0.01). Compared with methods proposed in previous research, the ITSD-CNN framework outperforms state-of-the-art classification methods with a mean kappa value of 0.664.
Year: 2020 PMID: 32908576 PMCID: PMC7474754 DOI: 10.1155/2020/1683013
Source DB: PubMed Journal: Comput Math Methods Med ISSN: 1748-670X Impact factor: 2.238
Figure 1. A diagram representing the (a) subject-dependent (SD) and (b) subject-independent (SI) training strategies.
Figure 2. A diagram representing the instance transfer subject-dependent (ITSD) training strategy.
Figure 3. Diagram of a trial and timings.
Figure 4. Spectrogram images of 3 electrodes after STFT.
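The spectrograms in Figure 4 come from a short-time Fourier transform of the raw EEG. Below is a minimal pure-Python sketch of a magnitude STFT; the window length, hop size, and Hann windowing here are illustrative assumptions, not the paper's exact parameters:

```python
import cmath
import math

def stft_magnitude(signal, win_len=64, hop=32):
    """Magnitude spectrogram via a naive windowed DFT.

    Returns a list of frames; each frame holds |X[k]| for
    k = 0 .. win_len // 2 (the non-redundant bins).
    """
    hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / (win_len - 1))
            for n in range(win_len)]
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        windowed = [signal[start + n] * hann[n] for n in range(win_len)]
        mags = []
        for k in range(win_len // 2 + 1):
            acc = sum(windowed[n] * cmath.exp(-2j * math.pi * k * n / win_len)
                      for n in range(win_len))
            mags.append(abs(acc))
        frames.append(mags)
    return frames

# Example: a sinusoid centered on DFT bin 8 peaks at bin 8.
x = [math.sin(2 * math.pi * 8 * n / 64) for n in range(256)]
spec = stft_magnitude(x)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
```

In the paper's pipeline, one such spectrogram per electrode is stacked into a 64 × 64 × 3 image (Table above), one channel per electrode.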
Figure 5. Transfer weight calculation using the perceptive hash algorithm.
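The transfer weight in Figure 5 is derived from a perceptual-hash comparison of spectrogram images. Here is a minimal sketch using the simpler average-hash variant rather than the DCT-based hash; the paper's exact hash and weighting scheme may differ, and images are represented as plain 2D lists of grey values:

```python
def average_hash(img, size=8):
    """Downsample img (a 2D list) to size x size by block averaging,
    then threshold each cell at the global mean to get a bit list."""
    h, w = len(img), len(img[0])
    bh, bw = h // size, w // size
    cells = []
    for i in range(size):
        for j in range(size):
            block = [img[i * bh + di][j * bw + dj]
                     for di in range(bh) for dj in range(bw)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return [1 if c > mean else 0 for c in cells]

def transfer_weight(img_a, img_b):
    """Similarity in [0, 1]: 1 - normalized Hamming distance of hashes."""
    ha, hb = average_hash(img_a), average_hash(img_b)
    dist = sum(a != b for a, b in zip(ha, hb))
    return 1.0 - dist / len(ha)

# Identical spectrograms hash identically and get full weight.
img = [[(i * j) % 7 for j in range(32)] for i in range(32)]
w_same = transfer_weight(img, img)
```

Instances from other subjects whose spectrograms score a high weight against the target subject's are the natural candidates for transfer.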
Figure 6. The structure of the convolutional neural network for classification.
Detailed architecture of the CNN.
| Layer | Type | Size | Stride | Output dimension | Activation | Mode |
|---|---|---|---|---|---|---|
| 1 | Input | | | (64, 64, 3) | | Valid |
| 2 | Convolution | 3 × 3 | (1, 1) | (64, 64, 8) | ReLU | |
| 3 | Max pooling | 2 × 2 | | (32, 32, 8) | | |
| 4 | Convolution | 3 × 3 | | (32, 32, 8) | | |
| 5 | Max pooling | 2 × 2 | | (16, 16, 8) | | |
| 6 | Dense | | | (10, 1) | | |
| 7 | Dense | | | (2, 1) | Softmax | |
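The output dimensions in the architecture table can be sanity-checked with a small shape calculator. Note one assumption: although the table lists a "Valid" mode, a 3 × 3 stride-1 convolution that maps 64 × 64 to 64 × 64 implies "same" (zero) padding, so that is what is modeled here:

```python
def conv_same(shape, filters):
    """3x3 stride-1 convolution with 'same' padding: spatial size kept."""
    h, w, _ = shape
    return (h, w, filters)

def max_pool(shape):
    """2x2 max pooling with stride 2: spatial size halved."""
    h, w, c = shape
    return (h // 2, w // 2, c)

shape = (64, 64, 3)            # layer 1: input spectrogram image
shape = conv_same(shape, 8)    # layer 2 -> (64, 64, 8)
shape = max_pool(shape)        # layer 3 -> (32, 32, 8)
shape = conv_same(shape, 8)    # layer 4 -> (32, 32, 8)
shape = max_pool(shape)        # layer 5 -> (16, 16, 8)
flat = shape[0] * shape[1] * shape[2]  # flattened input to the dense layers
```

The two dense layers then map the flattened features to 10 units and finally to 2 softmax outputs for the binary MI classes.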
Figure 7. Subject-dependent training strategy.
Figure 8. Subject-independent training strategy.
Figure 9. Instance transfer subject-dependent training strategy.
Size of the dataset for the three training strategies.
| | SI-CNN | SD-CNN | ITSD-CNN |
|---|---|---|---|
| Training data | 5760 | 648 | 648 + transferred instances |
| Test data | 720 | 72 | 72 |
Classification accuracy (%) of different training strategies (mean ± std. dev.).
| Subject | SI-CNN | SD-CNN | ITSD-CNN |
|---|---|---|---|
| 1 | 82.3 ± 2.3 | 80.0 ± 2.9 | 93.2 ± 2.1 |
| 2 | 79.5 ± 5.1 | 76.7 ± 3.1 | 96.7 ± 3.2 |
| 3 | 63.8 ± 5.8 | 66.7 ± 3.2 | 94.1 ± 6.1 |
| 4 | 76.5 ± 3.2 | 83.3 ± 3.2 | 97.2 ± 1.0 |
| 5 | 79.8 ± 3.2 | 86.7 ± 2.9 | 92.7 ± 2.7 |
| 6 | 76.2 ± 4.7 | 79.0 ± 3.8 | 95.6 ± 2.5 |
| 7 | 77.6 ± 3.5 | 83.3 ± 2.1 | 94.9 ± 3.1 |
| 8 | 78.9 ± 2.8 | 83.3 ± 4.5 | 96.1 ± 0.8 |
| 9 | 81.3 ± 3.7 | 86.7 ± 3.5 | 92.2 ± 2.0 |
| Average | 77.3 ± 3.8 | 80.6 ± 3.2 | 94.7 ± 2.6 |
Figure 10. ANOVA statistics of classification accuracy for the compared models.
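The comparison in Figure 10 rests on a one-way ANOVA across the three training strategies. A minimal pure-Python sketch of the F-statistic it is based on (the paper presumably used a standard statistics package, and the data below are illustrative, not the paper's):

```python
def one_way_f(groups):
    """One-way ANOVA F-statistic: ratio of between-group to
    within-group mean squares over a list of sample groups."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum((v - m) ** 2
                    for g, m in zip(groups, means) for v in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Toy example: two similar groups and one clearly separated group.
f_stat = one_way_f([[1, 2, 3], [2, 3, 4], [6, 7, 8]])
```

A large F (relative to the F distribution with the given degrees of freedom) corresponds to the small p-value (p < 0.01) reported for ITSD-CNN against the contrast models.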
Review of classifier performance on BCI competition IV dataset 2b.
| Method | Researcher | Mean kappa value |
|---|---|---|
| FBCSP | Ang et al. | 0.502 |
| Twin SVM | Soman and Jayadeva | 0.526 |
| CNN-SAE | Tabar and Halici | 0.547 |
| CSCNN | Rong et al. | 0.663 |
| NCA + DTCWT | Malan and Sharma | 0.615 |
| CNN-VAE | Dai et al. | 0.564 |
| ITSD-CNN | Our method | 0.664 |
Figure 11. ANOVA statistics of the mean kappa values for the existing methods.
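The mean kappa values in the table above are Cohen's kappa, i.e. classification agreement corrected for chance. A minimal sketch for a square confusion matrix (the 2 × 2 case matches the binary MI task):

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows: true labels, columns: predicted labels)."""
    n = sum(sum(row) for row in confusion)
    # Observed agreement: fraction on the diagonal.
    p_obs = sum(confusion[i][i] for i in range(len(confusion))) / n
    # Expected agreement under chance, from the marginals.
    p_exp = sum(sum(confusion[i]) * sum(row[i] for row in confusion)
                for i in range(len(confusion))) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

# Example: 80% accuracy on balanced binary classes gives kappa = 0.6.
kappa = cohens_kappa([[40, 10], [10, 40]])
```

For a balanced two-class problem this reduces to kappa = 2 × accuracy − 1, which is why kappa is a stricter score than raw accuracy in the comparison table.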