Xingliang Tang, Xianrui Zhang.
Abstract
Decoding motor imagery (MI) electroencephalogram (EEG) signals for brain-computer interfaces (BCIs) is a challenging task because of the severe non-stationarity of perceptual decision processes. Recently, deep learning techniques have achieved great success in EEG decoding owing to their prominent ability to learn features automatically from raw EEG signals. However, deep learning methods face two obstacles: labeled EEG signals are scarce, and EEGs sampled from other subjects cannot be used directly to train a convolutional neural network (ConvNet) for a target subject. To address this problem, we present a novel conditional domain adaptation neural network (CDAN) framework for MI EEG signal decoding. Specifically, in the CDAN, a densely connected ConvNet is first applied to extract high-level discriminative features from raw EEG time series. Then, a novel conditional domain discriminator is introduced to act as an adversary to the label classifier, encouraging the network to learn EEG features shared across subjects. As a result, a CDAN model trained with sufficient EEG signals from other subjects can classify the signals of the target subject efficiently. Competitive experimental results on a public EEG dataset (the High Gamma Dataset) against state-of-the-art methods demonstrate the efficacy of the proposed framework in recognizing MI EEG signals, indicating its effectiveness for automatic perceptual decision decoding.
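The conditioning step described in the abstract can be illustrated with a small sketch. The details below (outer-product conditioning of ConvNet features with classifier predictions, fed to the domain discriminator) follow the common CDAN formulation and are an assumption for illustration, not the authors' exact implementation:

```python
import numpy as np

def multilinear_conditioning(features, predictions):
    """Condition the domain discriminator's input on classifier predictions
    via the per-sample outer product of features and class probabilities.

    features:    (batch, d) high-level EEG features from the ConvNet
    predictions: (batch, c) softmax class probabilities from the label classifier
    returns:     (batch, d * c) conditioned representation for the discriminator
    """
    outer = features[:, :, None] * predictions[:, None, :]  # (batch, d, c)
    return outer.reshape(features.shape[0], -1)

# Toy example: 2 samples, 4-dim features, 3 hypothetical MI classes
rng = np.random.default_rng(0)
f = rng.normal(size=(2, 4))
g = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.1, 0.8]])
h = multilinear_conditioning(f, g)
print(h.shape)  # (2, 12)
```

During adversarial training, the discriminator tries to tell source from target samples given this conditioned input, while the feature extractor is trained (e.g., through a gradient-reversal layer) to make the two domains indistinguishable.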
Keywords: convolutional neural network; domain adaptation; electroencephalogram (EEG); motor imagery (MI); signal classification
Year: 2020 PMID: 33285871 PMCID: PMC7516530 DOI: 10.3390/e22010096
Source DB: PubMed Journal: Entropy (Basel) ISSN: 1099-4300 Impact factor: 2.524
Figure 1. The architecture of the CDAN model.
Figure 2. Illustration of the Dense ConvNet architecture, where the blue cuboids represent feature maps and the brown ones represent the convolution and pooling kernels.
Overall comparison.
| Method | Acc (%) | Precision (%) | Recall (%) | F1-Score |
|---|---|---|---|---|
| FBCSP | 91.2 | 91.6 | 91.2 | 0.914 |
| Shallow ConvNet | 89.3 | 89.5 | 89.3 | 0.894 |
| Deep ConvNet | 92.5 | 92.7 | 92.4 | 0.926 |
| Hybrid ConvNet * | 91.9 | 92.1 | 91.8 | 0.920 |
| Residual ConvNet * | 88.8 | 88.9 | 88.8 | 0.888 |
| DAN | 93.6 | 93.8 | 93.5 | 0.936 |
| CDAN-1 | 94.3 | 94.4 | 94.3 | 0.943 |
| CDAN | 95.3 | 95.2 | 95.3 | 0.952 |
* denotes results obtained by our own reimplementation.
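For reference, the Precision, Recall, and F1-Score columns above are consistent with macro averaging over the MI classes followed by F1 = 2PR/(P + R); this averaging scheme is an assumption, as the excerpt does not state it, and the labels below are hypothetical:

```python
import numpy as np

def macro_metrics(y_true, y_pred, n_classes):
    """Macro-averaged precision, recall, and F1 from integer label arrays."""
    precisions, recalls = [], []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))  # true positives for class c
        fp = np.sum((y_pred == c) & (y_true != c))  # false positives
        fn = np.sum((y_pred != c) & (y_true == c))  # false negatives
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    p, r = np.mean(precisions), np.mean(recalls)
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Hypothetical 4-class MI labels (left hand, right hand, feet, rest)
y_true = np.array([0, 0, 1, 1, 2, 2, 3, 3])
y_pred = np.array([0, 0, 1, 2, 2, 2, 3, 3])
p, r, f1 = macro_metrics(y_true, y_pred, 4)
```

As a sanity check against the table, the FBCSP row gives 2 x 0.916 x 0.912 / (0.916 + 0.912) ≈ 0.914, matching its reported F1-Score.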
Accuracy (%) of all subjects obtained by our methods.
| Subject | DAN | CDAN-1 | CDAN |
|---|---|---|---|
| 1 | 92.4 | 93.7 | 95.0 |
| 2 | 94.2 | 95.3 | 96.3 |
| 3 | 94.5 | 94.9 | 96.3 |
| 4 | 97.0 | 97.4 | 98.1 |
| 5 | 98.6 | 98.6 | 99.4 |
| 6 | 93.5 | 94.5 | 95.0 |
| 7 | 92.3 | 93.2 | 93.7 |
| 8 | 97.7 | 98.4 | 98.8 |
| 9 | 95.8 | 96.8 | 97.5 |
| 10 | 90.3 | 91.2 | 92.5 |
| 11 | 90.9 | 91.3 | 92.5 |
| 12 | 94.1 | 94.7 | 95.6 |
| 13 | 93.4 | 94.3 | 95.6 |
| 14 | 85.3 | 86.2 | 87.5 |
| Average | 93.6 | 94.3 | 95.3 |
Figure 3. Confusion matrices obtained by: (a) FBCSP; (b) Shallow ConvNet; (c) Deep ConvNet; (d) CDAN.
Figure 4. The accuracy obtained by CDAN with different hyperparameters: (a) λ; (b) the number of output units; and (c) the kernel length.
Figure 5. Feature correlation maps.
Training time of the CDAN model for each subject (min:s).

| Subject | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | Average |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Time | 1:28 | 1:28 | 1:59 | 2:18 | 1:58 | 1:41 | 1:39 | 1:36 | 1:42 | 1:46 | 2:34 | 2:14 | 2:18 | 2:18 | 1:53 |