Qi Li, Yan Wu, Yu Song, Di Zhao, Meiqi Sun, Zhilin Zhang, Jinglong Wu.
Abstract
Background: Electroencephalogram (EEG)-based brain-computer interface (BCI) systems are widely used in fields such as health care, intelligent assistance, identity recognition, emotion recognition, and fatigue detection. The P300, a major event-related potential component, is the primary signal detected by EEG-based BCI systems. Existing algorithms for P300 classification in EEG data usually perform well when tested on a single participant but exhibit significant decreases in accuracy when tested on new participants. We attempted to address this lack of generalizability with a novel convolutional neural network (CNN) model developed using logistic regression (LR).
Keywords: P300; brain-computer interface; convolutional neural network; electroencephalogram; event-related potential; logistic regression
Year: 2022 PMID: 35782086 PMCID: PMC9243506 DOI: 10.3389/fncom.2022.909553
Source DB: PubMed Journal: Front Comput Neurosci ISSN: 1662-5188 Impact factor: 3.387
FIGURE 1. P300 speller paradigms.
FIGURE 2. Structure diagrams of the models.
Parameter table for CNN section.
| Layer | Filters | Size | Params | Activation | Feature map size |
| Input (EEG) | | | | | (batch_size, 30, 90) |
| Input (sparse feature) | | | | | (batch_size, 1, T) |
| Embedding [Input (sparse feature)] | | | 90 × T | | (1, 90) |
| Concatenate | | | | | (batch_size, 31, 90) |
| Conv2D | 16 | (1, 7) | 16 × 1 × 7 + 16 | Leaky ReLU | 31 × 42 × 16 (stride = 2) |
| Conv2D | 32 | (1, 7) | 32 × 1 × 7 × 16 + 32 | Leaky ReLU | 31 × 18 × 32 (stride = 2) |
| Conv2D | 16 | (1, 7) | 16 × 1 × 7 × 32 + 16 | Leaky ReLU | 31 × 13 × 16 (stride = 2) |
| Flatten | | | | | (6448, 1) |
| Dense | | | | | |
| Sigmoid | | | | | |
L, number of cross-features; N, number of classes.
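The Embedding and Concatenate rows are the distinctive step in this model: a sparse feature index is embedded into a (1, 90) row and stacked onto the 30-channel EEG epoch before the convolutions. A minimal NumPy sketch of that fusion step, where the vocabulary size T and the index are made-up values for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shapes follow the parameter table; T (the sparse-feature vocabulary
# size) and the index below are made-up values for illustration.
T = 5
eeg = rng.standard_normal((30, 90))       # one epoch: 30 channels x 90 samples
embedding = rng.standard_normal((T, 90))  # learned lookup table, 90 * T params
sparse_id = 3                             # sparse-feature index for this trial

row = embedding[sparse_id][None, :]         # Embedding -> (1, 90)
fused = np.concatenate([eeg, row], axis=0)  # Concatenate -> (31, 90)
```

The fused (31, 90) array is what the table's first Conv2D layer consumes.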
Parameter table for LR section.
| Layer | Size | Params | Activation |
| Input (EEG) | (30, 90) | | |
| Flatten | (2700, 1) | | |
| Linear | (1, 2700), (1) | 2,701 | |
| Dense | | | |
| Sigmoid | | | |
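The LR branch is small enough to sketch end to end. Assuming the table's shapes, a (30, 90) epoch is flattened to 2,700 features and passed through a single linear unit with a sigmoid, which accounts for the 2,701 parameters listed. This NumPy sketch uses random weights purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shapes follow the LR table; the weights here are random placeholders.
eeg = rng.standard_normal((30, 90))
w = rng.standard_normal(2700) * 0.01  # Linear layer weights, (1, 2700)
b = 0.0                               # bias -> 2,700 + 1 = 2,701 parameters

x = eeg.reshape(-1)                     # Flatten -> (2700,)
p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # Sigmoid -> P300 probability
```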
Test accuracy when training individual participants separately.
| | DT | RF | ADB | LR | MLP | SVM | KNN | LDA | LR-CNN | CNN | LSTM |
| Participant 1 | 0.819 | 0.931 | 0.932 | 0.825 | 0.936 | 0.931 | 0.906 | 0.931 | 0.951 | 0.989 | 0.959 |
| Participant 2 | 0.826 | 0.933 | 0.933 | 0.933 | 0.934 | 0.936 | 0.919 | 0.933 | 0.942 | 0.978 | 0.968 |
| Participant 3 | 0.838 | 0.932 | 0.932 | 0.932 | 0.933 | 0.934 | 0.931 | 0.932 | 0.929 | 0.992 | 0.962 |
| Participant 4 | 0.828 | 0.935 | 0.935 | 0.935 | 0.935 | 0.935 | 0.928 | 0.935 | 0.946 | 0.992 | 0.972 |
| Participant 5 | 0.816 | 0.933 | 0.933 | 0.933 | 0.933 | 0.936 | 0.915 | 0.933 | 0.919 | 0.984 | 0.954 |
| Participant 6 | 0.767 | 0.934 | 0.934 | 0.934 | 0.934 | 0.935 | 0.917 | 0.933 | 0.956 | 0.977 | 0.961 |
| Participant 7 | 0.820 | 0.935 | 0.936 | 0.935 | 0.935 | 0.936 | 0.915 | 0.835 | 0.927 | 0.969 | 0.959 |
| Participant 8 | 0.788 | 0.932 | 0.932 | 0.932 | 0.932 | 0.932 | 0.920 | 0.932 | 0.933 | 0.996 | 0.967 |
DT, decision tree; RF, random forest; ADB, AdaBoost; LR, logistic regression; MLP, multilayer perceptron; SVM, support vector machine; KNN, k-nearest neighbors; LDA, linear discriminant analysis; LR-CNN, logistic regression and convolutional neural network; CNN, convolutional neural network; LSTM, long short-term memory.
Test results for the remaining participant after training using data for the other seven participants.
| | DT | RF | ADB | LR | MLP | SVM | KNN | LDA | LR-CNN | CNN | LSTM |
| Participant 1 | 0.759 | 0.825 | 0.828 | 0.793 | 0.856 | 0.829 | 0.806 | 0.841 | 0.923 | 0.898 | 0.886 |
| Participant 2 | 0.737 | 0.836 | 0.828 | 0.813 | 0.863 | 0.837 | 0.819 | 0.824 | 0.932 | 0.883 | 0.877 |
| Participant 3 | 0.742 | 0.831 | 0.829 | 0.798 | 0.856 | 0.835 | 0.816 | 0.823 | 0.919 | 0.869 | 0.855 |
| Participant 4 | 0.716 | 0.822 | 0.826 | 0.816 | 0.853 | 0.840 | 0.814 | 0.854 | 0.932 | 0.889 | 0.883 |
| Participant 5 | 0.732 | 0.836 | 0.833 | 0.822 | 0.833 | 0.833 | 0.811 | 0.821 | 0.917 | 0.796 | 0.811 |
| Participant 6 | 0.721 | 0.820 | 0.827 | 0.821 | 0.843 | 0.836 | 0.810 | 0.826 | 0.926 | 0.913 | 0.878 |
| Participant 7 | 0.727 | 0.843 | 0.826 | 0.816 | 0.852 | 0.842 | 0.812 | 0.822 | 0.862 | 0.877 | 0.858 |
| Participant 8 | 0.712 | 0.832 | 0.829 | 0.794 | 0.829 | 0.828 | 0.821 | 0.829 | 0.921 | 0.881 | 0.862 |
Test results for the remaining four participants after training using data from four participants (1).
| | DT | RF | ADB | LR | MLP | SVM | KNN | LDA | LR-CNN | CNN | LSTM |
| Participant 1 | 0.656 | 0.811 | 0.808 | 0.763 | 0.816 | 0.817 | 0.785 | 0.824 | 0.913 | 0.839 | 0.822 |
| Participant 5 | 0.707 | 0.806 | 0.798 | 0.773 | 0.823 | 0.815 | 0.801 | 0.815 | 0.926 | 0.828 | 0.802 |
| Participant 6 | 0.773 | 0.831 | 0.793 | 0.787 | 0.808 | 0.820 | 0.800 | 0.818 | 0.915 | 0.779 | 0.798 |
| Participant 8 | 0.703 | 0.802 | 0.806 | 0.801 | 0.824 | 0.833 | 0.796 | 0.820 | 0.909 | 0.818 | 0.802 |
Test results for the remaining four participants after training using data from four participants (4).
| | DT | RF | ADB | LR | MLP | SVM | KNN | LDA | LR-CNN | CNN | LSTM |
| Participant 1 | 0.701 | 0.801 | 0.818 | 0.793 | 0.816 | 0.827 | 0.810 | 0.821 | 0.923 | 0.846 | 0.839 |
| Participant 4 | 0.718 | 0.816 | 0.801 | 0.778 | 0.807 | 0.802 | 0.801 | 0.804 | 0.882 | 0.883 | 0.806 |
| Participant 6 | 0.744 | 0.822 | 0.786 | 0.782 | 0.818 | 0.809 | 0.797 | 0.818 | 0.919 | 0.816 | 0.816 |
| Participant 8 | 0.702 | 0.835 | 0.769 | 0.762 | 0.803 | 0.789 | 0.783 | 0.801 | 0.861 | 0.855 | 0.841 |
FIGURE 3. Accuracy and loss curves during training. (A) Test accuracy on the remaining four participants after training on the other four. (B) Test accuracy on the remaining participant after training on the other seven. (C) Test accuracy when training each participant separately. (D) Test loss on the remaining four participants after training on the other four. (E) Test loss on the remaining participant after training on the other seven. (F) Test loss when training each participant separately.
Test results for the remaining four participants after training using data from four participants (2).
| | DT | RF | ADB | LR | MLP | SVM | KNN | LDA | LR-CNN | CNN | LSTM |
| Participant 3 | 0.743 | 0.811 | 0.810 | 0.746 | 0.814 | 0.809 | 0.771 | 0.819 | 0.913 | 0.811 | 0.798 |
| Participant 4 | 0.729 | 0.815 | 0.805 | 0.791 | 0.817 | 0.822 | 0.809 | 0.822 | 0.873 | 0.832 | 0.822 |
| Participant 7 | 0.716 | 0.824 | 0.803 | 0.814 | 0.816 | 0.821 | 0.818 | 0.827 | 0.857 | 0.841 | 0.836 |
| Participant 8 | 0.704 | 0.811 | 0.806 | 0.801 | 0.816 | 0.836 | 0.766 | 0.808 | 0.861 | 0.856 | 0.843 |
Test results for the remaining four participants after training using data from four participants (3).
| | DT | RF | ADB | LR | MLP | SVM | KNN | LDA | LR-CNN | CNN | LSTM |
| Participant 2 | 0.750 | 0.801 | 0.805 | 0.820 | 0.808 | 0.824 | 0.862 | 0.821 | 0.933 | 0.812 | 0.801 |
| Participant 3 | 0.707 | 0.823 | 0.804 | 0.766 | 0.801 | 0.821 | 0.801 | 0.810 | 0.912 | 0.833 | 0.826 |
| Participant 4 | 0.724 | 0.804 | 0.833 | 0.793 | 0.823 | 0.809 | 0.796 | 0.803 | 0.929 | 0.815 | 0.833 |
| Participant 7 | 0.717 | 0.829 | 0.806 | 0.817 | 0.811 | 0.815 | 0.807 | 0.834 | 0.882 | 0.822 | 0.812 |
FIGURE 4. Box plots of test accuracy for different methods in the first set of experiments.
FIGURE 5. Box plots of test accuracy for different methods in the second set of experiments.
FIGURE 6. Box plots of test accuracy for different methods in the third set of experiments.
FTRL with L1 and L2 regularization. (Algorithm steps 1–11 were not recovered from the source.)
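The algorithm listing itself did not survive extraction. For reference, the standard per-coordinate FTRL-Proximal update with L1 and L2 regularization (following McMahan et al., 2013) can be sketched as below; the hyperparameter values (alpha, beta, l1, l2) are conventional defaults, not the paper's settings, and this may not match the paper's exact variant:

```python
import numpy as np

def ftrl_step(z, n, w, g, alpha=0.1, beta=1.0, l1=1.0, l2=1.0):
    """One per-coordinate FTRL-Proximal update (McMahan et al., 2013).

    z, n : accumulated adjusted gradients and squared gradients
    w    : current weights; g : gradient of the loss at w
    """
    sigma = (np.sqrt(n + g ** 2) - np.sqrt(n)) / alpha  # learning-rate change
    z = z + g - sigma * w
    n = n + g ** 2
    # Closed-form proximal solution: coordinates with |z| <= l1 are set
    # exactly to zero (sparsity); the rest are shrunk by l2 and by the
    # adaptive per-coordinate learning rate.
    w = np.where(
        np.abs(z) <= l1,
        0.0,
        -(z - np.sign(z) * l1) / ((beta + np.sqrt(n)) / alpha + l2),
    )
    return z, n, w

# Deterministic example: with zero gradient the update reduces to the
# proximal step, so |z| = 0.5 <= l1 is thresholded to exactly 0, while
# z = 3.0 gives w = -(3 - 1) / ((1 + 0) / 0.1 + 1) = -2/11.
z, n, w = ftrl_step(np.array([0.5, 3.0]), np.zeros(2), np.zeros(2), np.zeros(2))
```

The exact-zero thresholding in the proximal step is what gives FTRL its L1-induced sparsity, which plain SGD with an L1 penalty does not achieve.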