Yanan Sun, Dongju Guo, Jiali Xu, Yuandong Wang, Jincheng Li, Han Li, Gege Dong, Fenqi Rong, Fangzhou Xu, Yunjing Miao, Jiancai Leng, Yang Zhang.
Abstract
Deep learning networks have been successfully applied in transfer learning, so that models trained on a source domain can be adapted to different target domains. This study uses multiple convolutional neural networks to decode the electroencephalogram (EEG) of stroke patients in order to design an effective motor imagery (MI) brain-computer interface (BCI) system. The study introduces a 'fine-tune' strategy to transfer model parameters and reduce training time. The performance of the proposed framework is evaluated by the models' ability to perform two-class MI recognition. The results show that the best framework is the combination of EEGNet and the 'fine-tuned' transferred model. The average classification accuracy of the proposed model over 11 subjects is 66.36%, and its algorithmic complexity is much lower than that of the other models. This good performance indicates that the EEGNet model has great potential for BCI-based MI stroke rehabilitation. It also demonstrates the efficiency of transfer learning for improving the performance of EEG-based stroke rehabilitation BCI systems.
Year: 2021 PMID: 34611209 PMCID: PMC8492790 DOI: 10.1038/s41598-021-99114-1
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1 The collection process of an experiment.
Figure 2 The overall visualization of the EEGNet structure. The lines represent the connectivity of the convolution kernels between the input and the output (called a feature map), where C is the number of channels and T is the number of sampling points.
Parameter settings of the EEGNet structure: F1 = number of temporal filters, D = depth multiplier, F2 = number of pointwise filters.
| Block | Layer | Filters | Size | Output | Activation |
|---|---|---|---|---|---|
| 1 | Input | | | (C, T) | |
| | Reshape | | | (1, C, T) | |
| | Conv2D | F1 | (1, 64) | (F1, C, T) | Linear |
| | BatchNorm | | | (F1, C, T) | |
| | DepthwiseConv2D | D × F1 | (C, 1) | (D × F1, 1, T) | Linear |
| | BatchNorm | | | (D × F1, 1, T) | |
| | Activation | | | (D × F1, 1, T) | ELU |
| | AveragePool2D | | (1, 4) | (D × F1, 1, T // 4) | |
| | Dropout | | p = 0.25 or p = 0.5 | (D × F1, 1, T // 4) | |
| 2 | SeparableConv2D | F2 | (1, 16) | (F2, 1, T // 4) | Linear |
| | BatchNorm | | | (F2, 1, T // 4) | |
| | Activation | | | (F2, 1, T // 4) | ELU |
| | AveragePool2D | | (1, 8) | (F2, 1, T // 32) | |
| | Dropout | | p = 0.25 or p = 0.5 | (F2, 1, T // 32) | |
| | Flatten | | | F2 × (T // 32) | |
| Classifier | Dense | | max norm = 0.25 | Number of classes | Softmax |
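The shape column of the table can be traced programmatically. The following is a minimal sketch of the feature-map shapes through the two EEGNet blocks; the concrete values C = 64, T = 128, F1 = 4, D = 2, F2 = 8 are illustrative assumptions, not taken from this excerpt.

```python
# Sketch: trace feature-map shapes through the EEGNet blocks listed above.
# C = EEG channels, T = sampling points, F1 = temporal filters,
# D = depth multiplier, F2 = pointwise filters.

def eegnet_shapes(C, T, F1, D, F2):
    shapes = {}
    shapes["input"]     = (1, C, T)          # reshaped single-"image" input
    shapes["conv2d"]    = (F1, C, T)         # (1, 64) temporal conv, 'same' padding
    shapes["depthwise"] = (D * F1, 1, T)     # (C, 1) spatial conv collapses channels
    shapes["pool1"]     = (D * F1, 1, T // 4)      # AveragePool2D (1, 4)
    shapes["separable"] = (F2, 1, T // 4)          # (1, 16) separable conv, 'same'
    shapes["pool2"]     = (F2, 1, T // 32)         # AveragePool2D (1, 8)
    shapes["flatten"]   = (F2 * (T // 32),)        # features fed to the Dense softmax
    return shapes

shapes = eegnet_shapes(C=64, T=128, F1=4, D=2, F2=8)
```

Note how the depthwise (C, 1) convolution removes the spatial channel dimension, so both pooling stages act only along the time axis.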
Figure 3 The overall average classification accuracy of all models.
The value of each parameter of the model.

| Parameter | Value |
|---|---|
| Learning rate | 0.0001 |
| Dropout | 0.5 |
| Epochs | 100 |
| F1 (temporal filters) | 4 |
| F2 (pointwise filters) | 8 |
| D (depth multiplier) | 2 |
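The table gives only the hyperparameter values; the optimizer and loss are not stated in this excerpt. A minimal sketch of a training configuration built from the tabulated values, assuming an Adam optimizer and categorical cross-entropy loss (both assumptions, commonly paired with EEGNet):

```python
# Hypothetical training configuration mirroring the table above.
# Only the numeric values come from the paper; optimizer and loss
# are assumptions for illustration.
train_config = {
    "learning_rate": 1e-4,
    "dropout": 0.5,
    "epochs": 100,
    "optimizer": "adam",                     # assumption, not in the table
    "loss": "categorical_crossentropy",      # assumption, not in the table
}
```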
Figure 4 (a) The highest accuracy of EEGNet for each subject. (b) Average accuracy of the two datasets (healthy subjects and patients).
Classification results of the proposed framework compared with SVM and LDA.
| Subject | Proposed framework (%) | SVM (%) | LDA (%) |
|---|---|---|---|
| A01 | 75 | 65 | 60 |
| A02 | 75 | 75 | 70 |
| A03 | 65 | 55 | 65 |
| A04 | 55 | 60 | 45 |
| A05 | 65 | 50 | 60 |
| A06 | 70 | 55 | 65 |
| A07 | 70 | 60 | 65 |
| A08 | 65 | 55 | 55 |
| A09 | 55 | 60 | 50 |
| A10 | 65 | 65 | 65 |
| A11 | 70 | 65 | 50 |
| Mean | 66.36 | 60.45 | 59.09 |
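The "Mean" row above can be reproduced directly from the per-subject accuracies:

```python
# Reproduce the "Mean" row of the table from the per-subject accuracies.
proposed = [75, 75, 65, 55, 65, 70, 70, 65, 55, 65, 70]
svm      = [65, 75, 55, 60, 50, 55, 60, 55, 60, 65, 65]
lda      = [60, 70, 65, 45, 60, 65, 65, 55, 50, 65, 50]

def mean_pct(values):
    return round(sum(values) / len(values), 2)

means = [mean_pct(proposed), mean_pct(svm), mean_pct(lda)]
# → [66.36, 60.45, 59.09], matching the reported row
```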
Classification results obtained by different ‘fine-tune’ methods.
| Subject | EEGNet_0 (%) | EEGNet_1 (%) | EEGNet_2 (%) |
|---|---|---|---|
| A01 | 65 | 75 | 70 |
| A02 | 60 | 75 | 65 |
| A03 | 55 | 65 | 60 |
| A04 | 55 | 55 | 55 |
| A05 | 60 | 65 | 60 |
| A06 | 65 | 70 | 65 |
| A07 | 65 | 70 | 65 |
| A08 | 60 | 65 | 60 |
| A09 | 55 | 55 | 60 |
| A10 | 55 | 65 | 60 |
| A11 | 55 | 70 | 65 |
| Mean | 59.09 | 66.36 | 62.27 |
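This excerpt does not define how EEGNet_0, EEGNet_1, and EEGNet_2 differ. A plausible reading is that the variants freeze different numbers of pretrained blocks before fine-tuning on the target subject. The following framework-agnostic sketch illustrates such layer-freezing schemes; the layer names and the mapping of variants to frozen blocks are hypothetical.

```python
# Hypothetical sketch of 'fine-tune' variants as layer-freezing schemes.
# Assumption: EEGNet_0 retrains everything, EEGNet_1 freezes block 1,
# EEGNet_2 freezes blocks 1 and 2 (only the classifier is retrained).
LAYERS = ["block1_conv", "block1_depthwise", "block2_separable", "classifier"]

def trainable_layers(variant):
    frozen = {
        "EEGNet_0": [],                                    # full retraining
        "EEGNet_1": ["block1_conv", "block1_depthwise"],   # freeze block 1
        "EEGNet_2": ["block1_conv", "block1_depthwise",
                     "block2_separable"],                  # freeze blocks 1-2
    }[variant]
    return [layer for layer in LAYERS if layer not in frozen]
```

Under this reading, the table suggests that transferring the temporal/spatial filters while retraining the deeper layers (EEGNet_1) works best on average.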
FLOPs, parameters, and times of all models.

| Model type | FLOPs (G) | Params (M) | Time (s) |
|---|---|---|---|
| EEGNet | 0.0041 | 0.013 | 176 |
| DenseNet | 2.87 | 25.56 | 440 |
| Xception | 5.73 | 7.98 | 396 |
| ResNet50 | 4.11 | 23.83 | 506 |
| VGG16 | 18.11 | 138.36 | 792 |
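EEGNet's small parameter count follows from its use of depthwise and separable convolutions. A sketch of the standard per-layer parameter formulas, using the illustrative values F1 = 4, D = 2, F2 = 8, C = 64 (assumptions, not stated in this excerpt):

```python
# Standard parameter counts for the convolution types used in EEGNet.
def conv2d_params(c_in, c_out, kh, kw, bias=False):
    return kh * kw * c_in * c_out + (c_out if bias else 0)

def depthwise_params(c_in, mult, kh, kw):
    # one (kh x kw) kernel per input channel, times the depth multiplier
    return kh * kw * c_in * mult

def separable_params(c_in, c_out, kh, kw):
    # depthwise (kh x kw per input channel) + 1x1 pointwise projection
    return kh * kw * c_in + c_in * c_out

# Example with the table's symbols: F1=4, D=2, F2=8, C=64 channels.
p_temporal  = conv2d_params(1, 4, 1, 64)     # Conv2D, (1, 64) kernel
p_spatial   = depthwise_params(4, 2, 64, 1)  # DepthwiseConv2D, (C, 1) kernel
p_separable = separable_params(8, 8, 1, 16)  # SeparableConv2D, (1, 16) kernel
```

All three convolution stages together contribute on the order of a thousand weights, which is why EEGNet sits orders of magnitude below the ImageNet-scale models in the table.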