Yan Li, Ning Zhong, David Taniar, Haolan Zhang.
Abstract
Motor imagery classification has long been a challenging problem in brain informatics. For decades, accuracy and efficiency have been the major obstacles in motor imagery analysis, because computational capability and available algorithms could not cope with complex brain signal analysis. In recent years, the rapid development of machine learning (ML) methods has made it possible to tackle the motor imagery classification problem more efficiently. Among ML methods, graph neural networks (GNNs) have shown their efficiency and accuracy in dealing with interrelated complex networks, and they open new possibilities for extracting features from the brain's structural connectivity. In this paper, we propose a new model, MCGNet+, which improves on the performance of our previous model, MutualGraphNet. In this latest model, the mutual information between the input columns forms the initial adjacency matrix; the cosine similarity between columns is then computed in each iteration to generate a new adjacency matrix. This dynamic adjacency matrix, combined with a spatial-temporal graph convolutional network (ST-GCN), performs better than a model with a fixed matrix. The experimental results indicate that MCGNet+ is robust enough to learn interpretable features and outperforms the current state-of-the-art methods.
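The adjacency-matrix construction described in the abstract (mutual information between input columns as the initial matrix, refreshed each iteration by column-wise cosine similarity) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the histogram-based MI estimator, the `bins` parameter, and the `cosine_update` helper are assumptions.

```python
import numpy as np

def mutual_info_matrix(X, bins=8):
    """Pairwise mutual information between EEG channels (rows of X).

    X: array of shape (n_channels, n_samples).
    Uses a simple equal-width-histogram MI estimate; the paper's exact
    estimator is not specified here.
    """
    n = X.shape[0]
    # Discretize each channel into equal-width bins.
    digitized = np.stack(
        [np.digitize(x, np.histogram(x, bins=bins)[1][:-1]) for x in X]
    )
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            joint = np.histogram2d(digitized[i], digitized[j], bins=bins)[0]
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)   # marginal of channel i
            py = pxy.sum(axis=0, keepdims=True)   # marginal of channel j
            nz = pxy > 0                          # avoid log(0)
            A[i, j] = np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))
    return A

def cosine_update(H):
    """New adjacency from cosine similarity between node feature rows."""
    Hn = H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-8)
    return Hn @ Hn.T
```

In this sketch `mutual_info_matrix` would supply the initial graph, while `cosine_update` would be applied to the current node representations at each iteration to produce the dynamic adjacency matrix fed to the ST-GCN.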
Keywords: Brain–computer interfaces (BCI); Electroencephalography (EEG); Graph convolutional networks
Year: 2022 PMID: 35103873 PMCID: PMC8807751 DOI: 10.1186/s40708-021-00151-3
Source DB: PubMed Journal: Brain Inform ISSN: 2198-4026
Fig. 1 The structure generation of EEG data, where the data within a time range d forms a graph
Fig. 2 The overall structure of the proposed model consists of three parts: the feature extraction and mutual information computation part, the spatial-temporal attention mechanism part, and the spatial-temporal graph convolution part
Fig. 3 The process of generating and updating the adjacency matrix
Fig. 4 The distribution of the electrodes in 3D space
The hyper-parameters of the model and their corresponding values
| Hyperparameter | Value |
|---|---|
| Learning rate | 9.6e-4 |
| Learning rate decay | 0 |
| Dropout rate | 0.5 |
| Optimizer | Adam |
| L1, L2 regularization | 0.002, 0.001 |
| Training epochs | 500 |
| Batch size | 32 |
| Chebyshev polynomial order | 2 |
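The Chebyshev polynomial hyper-parameter above is the order K of the Chebyshev expansion used by the graph convolution. A minimal NumPy sketch of a K-order Chebyshev graph convolution is given below, using the recurrence T_k = 2·L̃·T_{k-1} − T_{k-2} on the scaled Laplacian; the combinatorial Laplacian and the per-order weight layout are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cheb_polynomials(L, K):
    """Chebyshev polynomials T_0..T_K of the scaled Laplacian L~."""
    n = L.shape[0]
    lam_max = np.linalg.eigvalsh(L).max()
    L_tilde = 2 * L / lam_max - np.eye(n)       # rescale spectrum to [-1, 1]
    T = [np.eye(n), L_tilde]
    for _ in range(2, K + 1):
        T.append(2 * L_tilde @ T[-1] - T[-2])   # Chebyshev recurrence
    return T[: K + 1]

def cheb_conv(X, A, W):
    """One Chebyshev graph convolution: sum_k T_k(L~) X W_k.

    X: (n_nodes, in_dim) node features; A: (n_nodes, n_nodes) adjacency;
    W: list of K+1 filter matrices, each (in_dim, out_dim).
    """
    D = np.diag(A.sum(axis=1))
    L = D - A                                   # combinatorial graph Laplacian
    T = cheb_polynomials(L, len(W) - 1)
    return sum(Tk @ X @ Wk for Tk, Wk in zip(T, W))
```

With order K = 2, as in the table, each output aggregates information from up to 2-hop neighborhoods of every electrode.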
The performance comparison of the state-of-the-art approaches on the SMR dataset
| Model | Accuracy | F1-score | Precision |
|---|---|---|---|
| SVM | 0.3488 | 0.3485 | 0.3486 |
| Deep ConvNet | 0.3507 | 0.3191 | 0.4148 |
| FBCSP | 0.3511 | 0.3366 | 0.3714 |
| RF | 0.4008 | 0.3996 | 0.4004 |
| EEGNet | 0.4616 | 0.4838 | 0.5095 |
| Shallow ConvNet | 0.4857 | 0.4789 | 0.4978 |
| MutualGraphNet (ours) | 0.5190 | 0.5175 | 0.5208 |
| MCGNet+ (ours) | | | |
Fig. 5 Performance of the proposed model with different numbers of ST-GCN layers
The performance of models for different features
| Model | Feature | Accuracy | F1-score | Precision |
|---|---|---|---|---|
| MCGNet | PSD | 0.2716 | 0.2695 | 0.2726 |
| MCGNet | DSAM | 0.4124 | 0.4049 | 0.4052 |
| MCGNet | ASM | 0.4039 | 0.3842 | 0.3877 |
| MCGNet | ASDM | 0.3973 | 0.3881 | 0.3881 |
| MCGNet | DCAU | 0.4375 | 0.4381 | 0.4435 |
| MCGNet | DE | | | |
| MutualGraphNet | PSD | 0.2604 | 0.2286 | 0.2595 |
| MutualGraphNet | DSAM | 0.3646 | 0.3523 | 0.3541 |
| MutualGraphNet | ASM | 0.3815 | 0.3820 | 0.3879 |
| MutualGraphNet | ASDM | 0.3811 | 0.3777 | 0.3764 |
| MutualGraphNet | DCAU | 0.4162 | 0.4144 | 0.4191 |
| MutualGraphNet | DE | | | |
Fig. 6 The performance of the proposed model with different kinds of adjacency matrices. RD represents random, ED represents Euclidean distance, ME represents Mut_Euclidean, MK denotes Mut_KNN, MI denotes mutual information, and MC denotes Mutual_Cos
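As a concrete example of one of the simpler variants compared in Fig. 6, an Euclidean-distance (ED) adjacency can be built from the 3D electrode coordinates of Fig. 4. The Gaussian kernel form and the `sigma` bandwidth below are illustrative assumptions; the paper does not specify this exact construction.

```python
import numpy as np

def euclidean_adjacency(coords, sigma=1.0):
    """ED-style adjacency: Gaussian kernel on pairwise electrode distances.

    coords: (n_electrodes, 3) array of 3D electrode positions.
    sigma: assumed kernel bandwidth controlling how fast edge weights decay.
    """
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)        # pairwise Euclidean distances
    A = np.exp(-dist**2 / (2 * sigma**2))       # closer electrodes -> heavier edges
    np.fill_diagonal(A, 0.0)                    # no self-loops
    return A
```

Such a purely geometric adjacency is static, which is one plausible reason the dynamic mutual-information/cosine variants in Fig. 6 can capture task-dependent connectivity that a distance-only graph misses.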