Guokai Zhang1, Le Yang2, Boyang Li2, Yiwen Lu3, Qinyuan Liu3, Wei Zhao4, Tianhe Ren5, Junsheng Zhou5, Shui-Hua Wang6,7, Wenliang Che8.
Abstract
Epilepsy is a prevalent neurological disorder that threatens human health worldwide. The electroencephalogram (EEG) is the most commonly used tool for detecting epilepsy. However, detecting epilepsy from the EEG is time-consuming and error-prone because physicians' levels of experience vary. To tackle this challenge, we propose a multi-scale non-local (MNL) network for automatic detection of epileptic EEG signals. The MNL-Network is built on a 1D convolutional neural network with two task-specific layers that improve classification performance. The first, the signal pooling layer, combines 1D max-pooling at three different sizes to learn multi-scale features from the EEG signal. The second, the multi-scale non-local layer, computes correlations among the extracted multi-scale features and outputs correlation-encoded features to further enhance classification. We evaluate the model on the Bonn dataset, and the experimental results demonstrate that the MNL-Network achieves competitive results on the EEG classification task.
Keywords: EEG; convolution neural network; epilepsy; ictal; interictal; multi-scale; non-local; seizure
Year: 2020 PMID: 33281538 PMCID: PMC7705239 DOI: 10.3389/fnins.2020.00870
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 4.677
Figure 1. Data samples from the Bonn dataset.
Figure 2. Overview of the MNL-Network. The backbone is a 1D convolutional neural network with two additional layers (the signal pooling layer and the multi-scale non-local layer), developed to learn multi-scale and correlative features from the EEG signals.
The parameters of the MNL-Network.
| Layer | Filters/Units | Kernel size | Stride | Output size |
| Convolution layer | 20 | 40 | – | 178 × 20 |
| BN layer | – | – | – | 178 × 20 |
| Max-pooling layer | – | 2 | 2 | 89 × 20 |
| Convolution layer | 40 | 20 | 2 | 35 × 40 |
| BN layer | – | – | – | 35 × 40 |
| Max-pooling layer | – | 2 | 2 | 17 × 40 |
| Convolution layer | 80 | 10 | 2 | 4 × 80 |
| BN layer | – | – | – | 4 × 80 |
| Signal pooling layer | – | – | – | 4 × 80 |
| Multi-scale non-local layer | – | – | – | 4 × 80 |
| Flatten | – | – | – | 4 × 80 |
| FC layer | 64 | – | – | 64 |
| FC layer | 32 | – | – | 32 |
| FC layer | 2 | – | – | 5 |
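The output sizes in the table above follow the standard 1D convolution/pooling length formula, floor((n + 2p − k)/s) + 1 (the first convolution keeps the length at 178, i.e., same-padding). A quick plain-Python sketch that checks the temporal dimensions against the table:

```python
def out_len(n, kernel, stride=1, padding=0):
    """Output length of a 1D convolution or pooling layer:
    floor((n + 2*padding - kernel) / stride) + 1."""
    return (n + 2 * padding - kernel) // stride + 1

n = 178                 # input segment length (Bonn, 178-sample version)
n = out_len(n, 2, 2)    # max-pooling, size 2, stride 2 -> 89
n = out_len(n, 20, 2)   # convolution, kernel 20, stride 2 -> 35
n = out_len(n, 2, 2)    # max-pooling, size 2, stride 2 -> 17
n = out_len(n, 10, 2)   # convolution, kernel 10, stride 2 -> 4
print(n)                # 4, matching the 4 x 80 rows of the table
```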
Figure 3. Architecture of the signal pooling layer. M(x)1, M(x)2, and M(x)4 denote max-pooling operations with sizes {1, 2, 4}, respectively, and Z(M(x)1), Z(M(x)2), and Z(M(x)4) are zero-padding operations with sizes {1, 2, 4} that reshape the feature maps to a common resolution.
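The signal pooling layer of Figure 3 can be sketched in NumPy as follows. The pooling sizes {1, 2, 4} and the zero-padding back to the input length come from the caption; summing the branches is an assumed fusion (the paper may concatenate or combine them differently):

```python
import numpy as np

def max_pool_1d(x, size):
    """Non-overlapping 1D max-pooling over the last axis (stride == size)."""
    c, n = x.shape
    m = n // size
    return x[:, : m * size].reshape(c, m, size).max(axis=2)

def signal_pooling(x, sizes=(1, 2, 4)):
    """Pool x at each size, zero-pad back to the original length, and fuse.
    Summation is an assumption; only the multi-scale pooling + zero-padding
    structure is taken from the figure caption."""
    c, n = x.shape
    fused = np.zeros((c, n))
    for size in sizes:
        pooled = max_pool_1d(x, size)
        padded = np.zeros((c, n))
        padded[:, : pooled.shape[1]] = pooled   # zero-pad to input length
        fused += padded
    return fused

x = np.arange(8.0).reshape(1, 8)
print(signal_pooling(x).shape)   # (1, 8): resolution is preserved
```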
Figure 4. Architecture of the multi-scale non-local layer. x denotes the multi-scale features extracted by the previous layer; ϕ, δ, and ϱ are the output features of the 1D convolutions; R is the similarity matrix of ϕ and δ. Multiplying R with ϱ yields the attended features, which form the final output of the multi-scale non-local layer.
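The layer in Figure 4 follows the general non-local (self-attention) pattern: three 1D convolutions produce ϕ, δ, and ϱ; a similarity matrix R is formed from ϕ and δ; and R is applied to ϱ. A minimal NumPy sketch, with 1×1 linear projections standing in for the paper's Conv1D blocks, and a softmax normalization and residual connection included as assumptions:

```python
import numpy as np

def non_local_1d(x, w_phi, w_delta, w_rho):
    """Non-local block over a (channels, positions) feature map x.
    w_* are (C, C) matrices standing in for 1x1 Conv1D projections."""
    phi, delta, rho = w_phi @ x, w_delta @ x, w_rho @ x   # each (C, N)
    r = phi.T @ delta                                     # (N, N) similarity R
    r = np.exp(r - r.max(axis=1, keepdims=True))          # row-wise softmax
    r = r / r.sum(axis=1, keepdims=True)
    attended = rho @ r.T                                  # R applied to rho
    return x + attended                                   # residual (assumed)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 6))          # 4 channels, 6 positions
w = [rng.standard_normal((4, 4)) * 0.1 for _ in range(3)]
y = non_local_1d(x, *w)
print(y.shape)                           # (4, 6): same shape as the input
```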
Ten-fold cross-validation accuracy (%) on the two-class classification tasks.
| Task | Fold 1 | Fold 2 | Fold 3 | Fold 4 | Fold 5 | Fold 6 | Fold 7 | Fold 8 | Fold 9 | Fold 10 | Avg |
| A-E | 99.82 | 100 | 100 | 99.78 | 100 | 99.78 | 100 | 100 | 100 | 100 | 99.93 |
| B-E | 99.35 | 100 | 100 | 99.35 | 99.78 | 99.57 | 99.78 | 100 | 99.35 | 100 | 99.72 |
| C-E | 98.45 | 99.57 | 99.13 | 99.78 | 99.78 | 99.56 | 99.13 | 98.91 | 99.13 | 100 | 99.35 |
| D-E | 98.70 | 98.48 | 99.57 | 98.91 | 98.48 | 99.35 | 98.70 | 98.70 | 98.70 | 98.04 | 98.76 |
| AB-E | 99.57 | 100 | 99.86 | 99.42 | 99.71 | 99.71 | 99.71 | 100 | 99.57 | 100 | 99.75 |
| AC-E | 99.42 | 99.42 | 99.57 | 99.57 | 99.42 | 99.71 | 99.28 | 99.28 | 99.28 | 99.86 | 99.48 |
| AD-E | 98.99 | 99.42 | 99.13 | 99.28 | 98.84 | 98.99 | 98.70 | 99.28 | 98.99 | 99.42 | 99.10 |
| BC-E | 99.28 | 99.86 | 99.42 | 99.28 | 98.99 | 99.28 | 99.13 | 99.57 | 98.99 | 100 | 99.38 |
| BD-E | 98.84 | 98.84 | 98.99 | 98.84 | 98.26 | 98.99 | 98.99 | 98.99 | 98.16 | 99.57 | 98.84 |
| CD-E | 98.84 | 99.42 | 98.99 | 99.28 | 98.12 | 98.99 | 99.70 | 98.12 | 98.70 | 99.28 | 98.84 |
| ABC-E | 99.57 | 99.67 | 99.78 | 99.24 | 99.78 | 99.67 | 99.57 | 99.78 | 99.57 | 99.24 | 99.59 |
| ABD-E | 99.46 | 99.13 | 99.24 | 99.02 | 99.57 | 99.78 | 99.24 | 99.57 | 99.13 | 99.13 | 99.33 |
| BCD-E | 99.57 | 98.70 | 99.24 | 98.70 | 99.35 | 99.35 | 98.59 | 99.35 | 98.91 | 98.04 | 98.98 |
| ABCD-E | 99.04 | 99.39 | 99.39 | 99.13 | 98.87 | 99.48 | 99.48 | 98.70 | 99.48 | 98.70 | 99.17 |
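Each row above lists the ten per-fold accuracies followed by their mean, which can be checked directly; e.g., for the B-E row (values copied from the table):

```python
# Per-fold accuracies of the B-E task, copied from the table above.
b_e_folds = [99.35, 100, 100, 99.35, 99.78, 99.57, 99.78, 100, 99.35, 100]
avg = sum(b_e_folds) / len(b_e_folds)
print(round(avg, 2))   # 99.72, matching the table's final column
```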
Overall performance (%) on the two-class classification tasks.
| Task | Acc | Sen | Spe | Pre | F1 |
| A-E | 99.93 | 98.96 | 99.96 | 99.96 | 99.45 |
| B-E | 99.72 | 97.96 | 99.91 | 99.91 | 98.92 |
| C-E | 99.35 | 96.70 | 99.83 | 99.82 | 98.23 |
| D-E | 98.76 | 97.09 | 98.30 | 98.31 | 97.68 |
| AB-E | 99.75 | 97.87 | 99.98 | 99.96 | 98.90 |
| AC-E | 99.48 | 96.91 | 99.85 | 99.69 | 98.28 |
| AD-E | 99.10 | 97.04 | 99.48 | 98.94 | 97.98 |
| BC-E | 99.38 | 96.61 | 99.89 | 99.78 | 98.16 |
| BD-E | 98.84 | 95.70 | 99.48 | 98.93 | 97.28 |
| CD-E | 98.84 | 95.30 | 99.61 | 99.18 | 97.20 |
| ABC-E | 99.59 | 96.13 | 99.97 | 99.91 | 97.98 |
| ABD-E | 99.33 | 96.57 | 99.58 | 98.72 | 97.62 |
| BCD-E | 98.98 | 94.39 | 99.62 | 98.83 | 96.53 |
| ABCD-E | 99.17 | 94.65 | 99.76 | 99.00 | 96.77 |
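Assuming the five columns report accuracy, sensitivity, specificity, precision, and F1-score (consistent with the values, e.g., for A-E: 2·99.96·98.96/(99.96+98.96) ≈ 99.45), the standard definitions are:

```python
def binary_metrics(tp, fn, tn, fp):
    """Standard binary-classification metrics (as fractions, not percent)."""
    acc = (tp + tn) / (tp + fn + tn + fp)   # accuracy
    sen = tp / (tp + fn)                    # sensitivity / recall
    spe = tn / (tn + fp)                    # specificity
    pre = tp / (tp + fp)                    # precision
    f1 = 2 * pre * sen / (pre + sen)        # F1-score
    return acc, sen, spe, pre, f1

# Hypothetical confusion-matrix counts, for illustration only.
print(binary_metrics(tp=95, fn=5, tn=90, fp=10))
```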
Ten-fold cross-validation accuracy (%) on the multi-class classification tasks.
| Task | Fold 1 | Fold 2 | Fold 3 | Fold 4 | Fold 5 | Fold 6 | Fold 7 | Fold 8 | Fold 9 | Fold 10 | Avg |
| A-C-E | 97.34 | 97.78 | 96.81 | 97.68 | 97.00 | 98.12 | 98.16 | 98.02 | 97.20 | 97.68 | 97.58 |
| A-D-E | 97.68 | 97.83 | 98.16 | 97.83 | 97.49 | 97.87 | 98.36 | 97.97 | 97.54 | 97.39 | 97.81 |
| B-C-E | 98.07 | 97.83 | 98.60 | 97.78 | 98.16 | 98.60 | 98.84 | 99.42 | 97.97 | 99.03 | 98.43 |
| B-D-E | 98.45 | 98.74 | 98.79 | 98.31 | 98.74 | 98.36 | 98.65 | 98.74 | 98.55 | 98.84 | 98.62 |
| AB-CD-E | 97.04 | 97.39 | 98.67 | 97.62 | 97.68 | 97.91 | 97.62 | 97.97 | 98.55 | 97.13 | 97.76 |
| A-B-C-D-E | 94.63 | 94.35 | 93.34 | 93.79 | 93.98 | 94.40 | 94.28 | 93.11 | 93.93 | 94.31 | 94.01 |
Overall performance (%) on the multi-class classification tasks.
| Task | Acc | Sen | Spe | Pre | F1 |
| A-C-E | 97.58 | 95.54 | 97.77 | 95.54 | 95.54 |
| A-D-E | 97.81 | 95.87 | 97.93 | 95.87 | 95.87 |
| B-C-E | 98.43 | 97.00 | 98.50 | 97.00 | 97.00 |
| B-D-E | 98.62 | 97.14 | 98.57 | 97.14 | 97.14 |
| AB-CD-E | 97.76 | 95.95 | 97.97 | 95.95 | 95.95 |
| A-B-C-D-E | 94.01 | 83.49 | 83.49 | 95.87 | 89.46 |
Comparison with existing methods (reported accuracy vs. ours, %).
| Task | Method | Reference | Acc | Ours |
| | 1-D-LBP + FT/BN | Kaya et al. | 99.50 | |
| | FFT + decision tree | Polat and Güneş | 98.70 | |
| A-E | Wavelet transform | Lee et al. | 98.17 | 99.93 |
| | Artificial neural networks | Nigam and Graupe | 97.50 | |
| | CWT + CNN | Türk and Özerdem | 99.50 | |
| | Robust CNN | Zhao et al. | 99.11 | |
| | CNN + M-V | Ullah et al. | 99.6 | |
| B-E | CWT + CNN | Türk and Özerdem | 99.50 | 99.72 |
| | DTCWT + GRNN | Swami et al. | 98.9 | |
| | DWT + NB/KNN | Sharmila | 99.25 | |
| | CWT + CNN | Türk and Özerdem | 98.50 | |
| | CCNN + M-V | Ullah et al. | 99.1 | |
| C-E | DTCWT + GRNN | Swami et al. | 98.7 | 99.35 |
| | Robust CNN | Zhao et al. | 99.1 | |
| | P-1D-CNN | Ullah et al. | 98.02 | |
| | TQWT + K-NN entropy | Abhijit et al. | 98.00 | |
| | CEEMDAN + RF | Jia et al. | 98.00 | |
| D-E | DTCWT + GRNN | Swami et al. | 98.00 | 98.76 |
| | Robust CNN | Zhao et al. | 97.63 | |
| | WPE + SVM | Tawfik et al. | 96.50 | |
| | DWT + NB/KNN | Sharmila | 99.16 | |
| AB-E | DTCWT + GRNN | Swami et al. | 99.2 | 99.75 |
| | Robust CNN | Zhao et al. | 99.38 | |
| BC-E | DWT + NB/K-NN | Sharmila | 98.3 | 99.38 |
| | 1-D-LBP + FT/BN | Kaya et al. | 97.00 | |
| CD-E | DWT + NB/KNN | Sharmila | 98.75 | 98.84 |
| | Robust CNN | Zhao et al. | 98.03 | |
| | 1-D-LBP + FT/BN | Kaya et al. | 95.67 | |
| A-D-E | LSP-SVM | Tuncer et al. | 95.67 | 97.81 |
| | TQWT-QSP + 1N | Aydemir et al. | 99.67 | |
| | LSP-SVM | Tuncer et al. | 93.0 | |
| A-B-C-D-E | Robust CNN | Zhao et al. | 93.55 | 94.01 |