| Literature DB >> 31849587 |
Hao Wu1, Yi Niu1, Fu Li1, Yuchen Li1, Boxun Fu1, Guangming Shi1, Minghao Dong2.
Abstract
OBJECTIVE: Electroencephalogram (EEG) based brain-computer interfaces (BCI) in motor imagery (MI) have developed rapidly in recent years. A reliable feature extraction method is essential because of a low signal-to-noise ratio (SNR) and time-dependent covariates of EEG signals. Because of efficient application in various fields, deep learning has been adopted in EEG signal processing and has obtained competitive results compared with the traditional methods. However, designing and training an end-to-end network to fully extract potential features from EEG signals remains a challenge in MI. APPROACH: In this study, we propose a parallel multiscale filter bank convolutional neural network (MSFBCNN) for MI classification. We introduce a layered end-to-end network structure, in which a feature-extraction network is used to extract temporal and spatial features. To enhance the transfer learning ability, we propose a network initialization and fine-tuning strategy to train an individual model for inter-subject classification on small datasets. We compare our MSFBCNN with the state-of-the-art approaches on open datasets.Entities:
Keywords: BCI; EEG; convolutional neural networks; deep learning; motor imagery
Year: 2019 PMID: 31849587 PMCID: PMC6901997 DOI: 10.3389/fnins.2019.01275
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 4.677
FIGURE 1. Framework of the proposed MSFBCNN network.
FIGURE 2. Proposed MSFBCNN architecture.
Detailed architecture of the proposed network (C denotes the number of EEG channels).

| Layer | Kernel size | Stride | Activation | Padding |
| Input | – | – | – | – |
| Reshape | – | – | – | – |
| TimeConv1 | (64, 1) | (1, 1) | Linear | Same |
| TimeConv2 | (40, 1) | (1, 1) | Linear | Same |
| TimeConv3 | (26, 1) | (1, 1) | Linear | Same |
| TimeConv4 | (16, 1) | (1, 1) | Linear | Same |
| Concat | – | – | – | – |
| BatchNorm | – | – | – | – |
| SpatialConv | (1, C) | (1, 1) | Linear | Valid |
| BatchNorm | – | – | – | – |
| Non-Linear | – | – | Square | – |
| AveragePool | (75, 1) | (15, 1) | – | Valid |
| Non-Linear | – | – | Log | – |
| Dropout | – | – | – | – |
| Classifier | – | (1, 1) | Linear | Valid |
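The table above describes a pipeline of four parallel temporal convolutions, concatenation, a spatial convolution across electrodes, and a square → average-pool → log sequence before the classifier. A minimal PyTorch sketch of that pipeline is given below; the filter count `f_t`, the input dimensions, and the clamp floor before the log are illustrative assumptions, not values from the paper:

```python
import torch
import torch.nn as nn


class MSFBCNN(nn.Module):
    """Sketch of the parallel multiscale filter bank CNN described in the table."""

    def __init__(self, n_channels=22, n_samples=1000, n_classes=4, f_t=10, dropout=0.5):
        super().__init__()
        # Four parallel temporal convolutions with multiscale kernels
        # (sizes from the architecture table), 'same' padding along time.
        self.time_convs = nn.ModuleList(
            nn.Conv2d(1, f_t, kernel_size=(k, 1), padding="same")
            for k in (64, 40, 26, 16)
        )
        self.bn1 = nn.BatchNorm2d(4 * f_t)
        # Spatial convolution collapses the electrode dimension ('valid' padding).
        self.spatial_conv = nn.Conv2d(4 * f_t, 4 * f_t, kernel_size=(1, n_channels))
        self.bn2 = nn.BatchNorm2d(4 * f_t)
        self.pool = nn.AvgPool2d(kernel_size=(75, 1), stride=(15, 1))
        self.drop = nn.Dropout(dropout)
        n_features = 4 * f_t * ((n_samples - 75) // 15 + 1)
        self.classifier = nn.Linear(n_features, n_classes)

    def forward(self, x):                    # x: (batch, channels, time)
        x = x.permute(0, 2, 1).unsqueeze(1)  # -> (batch, 1, time, channels)
        # Concatenate the four temporal filter banks along the channel axis.
        x = torch.cat([conv(x) for conv in self.time_convs], dim=1)
        x = self.bn2(self.spatial_conv(self.bn1(x)))
        # Square -> average pool -> log, as in the table; the clamp is only
        # a numerical-stability guard added for this sketch.
        x = torch.log(torch.clamp(self.pool(x ** 2), min=1e-6))
        return self.classifier(self.drop(x.flatten(1)))
```

A forward pass on a batch of raw trials, e.g. `MSFBCNN()(torch.randn(2, 22, 1000))`, yields one logit per class and per trial.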
Accuracy (%) in intra-subject experiments.

| Model | Dataset 1 | Dataset 2 | Dataset 3 |
| DeepNet | 66.8 | 83.6 | 84.8 |
| EEGNet | 66.7 | 83.1 | 84.0 |
| ShallowFBCSPNet | 72.3 | 81.5 | 91.6 |
| MSFBCNN | 75.8 | 84.3 | 94.4 |
Accuracy (%) of inter-subject transfer learning.

| Model | Dataset 1 | Dataset 2 | Dataset 3 |
| DeepNet | 71.9 | 84.1 | 90.9 |
| EEGNet | 69.9 | 83.6 | 88.6 |
| ShallowFBCSPNet | 73.8 | 83.7 | 92.3 |
| MSFBCNN | 75.9 | 84.7 | 94.9 |
Results of fine-tuning on a small number of training samples (accuracy, %).

| # training samples | Dataset 1 | Dataset 2 | Dataset 3 |
| 10 | 60.0 | 74.8 | 84.7 |
| 20 | 65.6 | 78.9 | 84.9 |
| 50 | 67.6 | 81.8 | 86.8 |
| 100 | 75.0 | 83.1 | 89.3 |
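The fine-tuning results above rest on the proposed strategy of initializing a target-subject model from a network pre-trained on other subjects and then updating it on a handful of target trials. A minimal sketch of that strategy, assuming a generic PyTorch classifier and Adam with a small learning rate (both illustrative choices, not details confirmed by the paper):

```python
import copy

import torch
from torch import nn, optim


def fine_tune(pretrained: nn.Module, x_target, y_target, lr=1e-4, epochs=20):
    """Initialize from an inter-subject model, then fine-tune on a few target trials."""
    model = copy.deepcopy(pretrained)            # keep the source model intact
    opt = optim.Adam(model.parameters(), lr=lr)  # small LR limits forgetting
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x_target), y_target)
        loss.backward()
        opt.step()
    return model
```

In this sketch all weights stay trainable; a variant that freezes the feature-extraction layers and updates only the classifier would follow the same pattern with a filtered parameter list passed to the optimizer.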
Comparison of EEGNet, DeepNet, T-EEGNet, and T-DeepNet (accuracy, %).

| Model | Dataset 1 | Dataset 2 | Dataset 3 |
| EEGNet | 66.7 | 83.1 | 84.0 |
| T-EEGNet | 70.8 | 85.3 | 91.7 |
| DeepNet | 68.8 | 83.6 | 88.1 |
| T-DeepNet | 70.9 | 84.9 | 92.8 |
Effects of different F and D, compared with the original models (accuracy, %).

| Model | F | D | Dataset 1 | Dataset 2 |
| MSFBCNN | 10 | – | 64.5 | 81.0 |
| MSFBCNN | 20 | – | 67.9 | 80.8 |
| MSFBCNN | 50 | – | 67.2 | 80.0 |
| MSFBCNN | 80 | – | 66.6 | 79.9 |
| MSFBCNN | – | 0.5 | 64.4 | 80.5 |
| MSFBCNN | – | 2 | 65.3 | 80.0 |
| MSFBCNN (original) | – | – | 75.8 | 84.3 |
| EEGNet | 4 | – | 61.5 | 79.2 |
| EEGNet | 16 | – | 63.6 | 77.1 |
| EEGNet | 32 | – | 58.4 | 75.0 |
| EEGNet | 40 | – | 56.2 | 72.0 |
| EEGNet | – | 1 | 63.5 | 78.1 |
| EEGNet | – | 4 | 55.4 | 76.8 |
| EEGNet (original) | – | – | 66.7 | 83.1 |
FIGURE 3. Feature maps after temporal feature extraction on HGD. The x-axis denotes time and the y-axis denotes the channel.