Xiangmin Lun, Jianwei Liu, Yifei Zhang, Ziqian Hao, Yimin Hou.
Abstract
Brain-computer interfaces (BCIs) based on motor imagery (MI) can help patients with limb movement disorders lead a normal life. Developing an efficient BCI system requires decoding motion intention with high accuracy from electroencephalogram (EEG) signals that have a low signal-to-noise ratio. In this article, an MI classification approach is proposed that combines the difference between EEG signals at left- and right-hemisphere electrodes with a dual convolutional neural network (dual-CNN), which effectively improves the decoding performance of the BCI. The forward and inverse problems of EEG were solved by the boundary element method (BEM) and weighted minimum norm estimation (WMNE), and the scalp signals were then mapped onto the cortical layer. Nine pairs of new electrodes were created on the cortex as the region of interest. The time series of the nine electrodes on the left and right hemispheres are used, respectively, as the inputs of the dual-CNN model to classify four MI tasks. The results show that this method performs well both at the group level and for individual subjects. On the Physionet database, the averaged group-level accuracy reaches 96.36%, while the accuracies of the four MI tasks reach 98.54, 95.02, 93.66, and 96.19%, respectively. For individual subjects, the highest accuracy is 98.88%, with four MI task accuracies of 99.62, 99.68, 98.47, and 97.73%, respectively.
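The abstract's source-mapping step (BEM forward model, WMNE inverse solution) can be sketched with the standard weighted minimum-norm formula. Below is a minimal numpy sketch under assumed dimensions: the lead field `L` would come from the BEM forward model, and the channel count, source count, and regularization parameter `lam` are hypothetical placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical dimensions: 64 scalp channels, 500 cortical sources.
rng = np.random.default_rng(0)
n_ch, n_src = 64, 500
L = rng.standard_normal((n_ch, n_src))  # lead field (would come from the BEM forward model)
x = rng.standard_normal(n_ch)           # scalp EEG at one time sample

# Depth weighting: weight each source by its lead-field column norm,
# so deep (weak) sources are not systematically suppressed.
w = np.linalg.norm(L, axis=0)
W_inv = np.diag(1.0 / w**2)

lam = 0.1                               # regularization parameter (assumed)
# WMNE estimate: s = W^-1 L^T (L W^-1 L^T + lam*I)^-1 x
G = L @ W_inv @ L.T + lam * np.eye(n_ch)
s_hat = W_inv @ L.T @ np.linalg.solve(G, x)
```

Applying this at every time sample maps the scalp time series to cortical time series, from which the nine pairs of regions of interest can then be read out.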
Keywords: brain-computer interface (BCI); convolutional neural network (CNN); electroencephalography (EEG); motor imagery (MI); weighted minimum norm estimation (WMNE)
Year: 2022 PMID: 35615273 PMCID: PMC9124859 DOI: 10.3389/fnins.2022.865594
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 5.152
Figure 1. The framework of the proposed approach.
Figure 2. Cortex preprocessing.
Proposed CNN architecture.

| Layer | Operation | Input | Feature maps | Kernel | Pooling | Output |
|---|---|---|---|---|---|---|
| L1 | Input | 1,280×9 | 1 | - | - | 640×9, 640×9 |
| L2 | Conv_L1, Conv_R1 | 640×9 | 25 | 11×9×25 | - | 630×1×25 |
| L3 | Pool_L1, Pool_R1 | 630×1×25 | 25 | - | 3×1 | 210×1×25 |
| L4 | Conv_L2, Conv_R2 | 210×1×25 | 50 | 11×1×50 | - | 200×1×50 |
| L5 | Pool_L2, Pool_R2 | 200×1×50 | 50 | - | 3×1 | 66×1×50 |
| L6 | Conv_L3, Conv_R3 | 66×1×50 | 100 | 11×1×100 | - | 56×1×100 |
| L7 | Pool_L3, Pool_R3 | 56×1×100 | 100 | - | 3×1 | 18×1×100 |
| L8 | Conv_L4, Conv_R4 | 18×1×100 | 200 | 11×1×200 | - | 8×1×200 |
| L9 | Pool_L4, Pool_R4 | 8×1×200 | 200 | - | 2×1 | 4×1×200 |
| L10 | Flatten_L, Flatten_R | 4×1×200 | 1 | - | - | 800 |
| L11 | Flatten_L - Flatten_R | 800 | 1 | - | - | 800 |
| L12 | FC | 800 | 1 | - | - | 128 |
| L13 | Softmax | 128 | 1 | - | - | 4 |
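The layer shapes in the table can be checked for internal consistency. A short sketch, assuming "valid" (unpadded) convolution along the time axis and non-overlapping pooling with floor division, reproduces the per-branch shape progression 640 → 630 → 210 → 200 → 66 → 56 → 18 → 8 → 4 and the flattened length of 800:

```python
# One branch of the dual-CNN: the 1,280x9 input is split into a
# left- and a right-hemisphere segment of 640 time samples each.

def conv_out(n, k):
    return n - k + 1   # valid (unpadded) convolution output length

def pool_out(n, p):
    return n // p      # non-overlapping pooling, floor division

t = 640                # time samples per branch
# The first 11x9 kernel collapses the 9-electrode axis; the remaining
# layers convolve/pool along time only: (kernel length, pool length).
for kernel, pool in [(11, 3), (11, 3), (11, 3), (11, 2)]:
    t = pool_out(conv_out(t, kernel), pool)

flat = t * 200         # flatten the final 4x1x200 feature map
print(t, flat)         # 4 800
```

The two 800-dimensional flattened vectors are then subtracted element-wise (layer L11), which encodes the left-right hemispheric difference before the fully connected and softmax layers.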
The classification accuracy of individual subjects.

| Subject | Accuracy (%) | Task 1 (%) | Task 2 (%) | Task 3 (%) | Task 4 (%) |
|---|---|---|---|---|---|
| S1 | 97.77 | 99.67 | 97.14 | 96.08 | 98.18 |
| S2 | 97.30 | 99.59 | 96.00 | 96.45 | 97.17 |
| S3 | 96.35 | 99.73 | 96.90 | 95.89 | 92.86 |
| S4 | 98.88 | 99.62 | 99.68 | 98.47 | 97.73 |
| S5 | 97.14 | 99.56 | 98.15 | 91.49 | 99.34 |
| S6 | 97.61 | 98.93 | 97.56 | 95.91 | 98.04 |
| S7 | 96.23 | 99.14 | 93.18 | 96.45 | 96.15 |
| S8 | 96.33 | 99.92 | 99.37 | 90.91 | 95.12 |
| S9 | 97.34 | 99.81 | 97.44 | 95.83 | 96.27 |
| S10 | 98.81 | 99.74 | 97.56 | 99.56 | 98.36 |
Figure 3. Performance comparison of 10 subjects. (A) Accuracy comparison. (B) Receiver operating characteristic (ROC) curve comparison.
Figure 4. Classification performance of 10 subjects. (A) Evaluation metrics. (B) Confusion matrix for the accuracy of 4 motor imagery (MI) tasks.
Performance comparison of different convolutional neural network (CNN) models.

| Model | Accuracy (%) | … | … | … | … |
|---|---|---|---|---|---|
| Proposed model | 96.36 | 95.23 | 96.62 | 96.27 | 96.44 |
| Model without dropout | 94.06 | 90.74 | 94.32 | 93.82 | 94.07 |
| Model without BN | 90.77 | 89.02 | 90.51 | 91.20 | 90.85 |
| Model without dropout & BN | 86.39 | 82.24 | 86.77 | 86.13 | 86.45 |
Figure 5. Performance comparison of different models. (A) Accuracy comparison. (B) ROC curve comparison.
Figure 6. The loss function curve on test data. (A) Loss function comparisons of 10 individual subjects. (B) Loss function comparisons of different classification models.
Performance comparison with other studies.

| Study | Level | Accuracy (%) | Method |
|---|---|---|---|
| Azimirad et al. | Global | 81.00 | SVM |
| Dose et al. | Global | 80.38 | CNN |
|  | Subject | 86.49 |  |
| Athif and Ren | Global | 64.00 | CSP |
| Hou et al. | Global | 94.54 | ESI + CNN |
|  | Subject | 94.50 |  |
| Handiru and Prasad | Global | 61.01 | SVM |
| This work | Global | 96.38 | CNN |
|  | Subject | 98.88 |  |