Zhuozheng Wang, Zhuo Ma, Wei Liu, Zhefeng An, Fubiao Huang.
Abstract
Depression is a common but easily misdiagnosed disease when assessed with a self-rating scale. Electroencephalograms (EEGs) provide an important reference and an objective basis for the identification and diagnosis of depression. To improve on the diagnostic accuracy of mainstream algorithms, this paper proposes a high-performance hybrid neural network for depression detection built with deep learning techniques. First, a one-dimensional convolutional neural network (1D-CNN) and a gated recurrent unit (GRU) are concatenated to extract the local features and capture the global features of the EEG signal. Second, an attention mechanism is introduced to form the hybrid network: it assigns different weights to the multi-dimensional features extracted by the network so as to screen out the more representative ones, which reduces the computational complexity of the network and shortens model training time while maintaining high precision. Moreover, dropout is applied to accelerate network training and address the over-fitting problem. Experiments show that the 1D-CNN-GRU-ATTN model is more effective and generalizes better than traditional algorithms. The accuracy of the proposed method reaches 99.33% on a public dataset and 97.98% on a private dataset.
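The attention step described in the abstract can be sketched as a softmax weighting over per-timestep feature vectors. The paper's exact scoring function is not given in this record, so the scores are treated as inputs in this minimal pure-Python illustration:

```python
import math

def attention_pool(features, scores):
    """Weight each timestep's feature vector by a softmax over its score,
    then sum; the pooled vector emphasises the highest-scoring timesteps."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]          # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(features[0])
    pooled = [sum(w * f[d] for w, f in zip(weights, features)) for d in range(dim)]
    return pooled, weights

# Toy sequence: 3 timesteps of 2-dim GRU outputs; the middle step scores highest,
# so it dominates the pooled representation.
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
pooled, weights = attention_pool(feats, scores=[0.1, 2.0, 0.1])
```

The same weighting generalizes to any (timesteps × feature-dim) sequence; a learned model would produce the scores from the features themselves.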
Keywords: attention mechanism; depression; electroencephalogram (EEG); gated recurrent unit (GRU); one-dimensional convolutional neural network (1D-CNN)
Year: 2022 PMID: 35884641 PMCID: PMC9313113 DOI: 10.3390/brainsci12070834
Source DB: PubMed Journal: Brain Sci ISSN: 2076-3425
Figure 1. Position of 16 electrodes in the international 10–20 system.
Properties of the datasets.
| Dataset | Property | Value |
|---|---|---|
| Public dataset | Number of subjects | 53 |
| | Number of subjects with depression | 24 |
| | Male/female ratio | 33/20 |
| | Number of channels | 128 |
| | Sampling rate (Hz) | 250 |
| Private dataset | Number of subjects | 32 |
| | Number of subjects with depression | 16 |
| | Male/female ratio | 16/16 |
| | Number of channels | 16 |
| | Sampling rate (Hz) | 250 |
Figure 2. EEG preprocessing process.
Figure 3. The architecture of the CNN.
Figure 4. The sliding direction of the filter in the 1D-CNN.
Figure 5. The structure of the hybrid network.
Tuning of network parameters. NFC_1/NFC_2: number of filters in convolutional layers 1 and 2; KSC: kernel size of the convolutional layers; NNG: number of neurons in the GRU; ACC: accuracy.
| Model | NFC_1 | NFC_2 | KSC | NNG | ACC | Loss |
|---|---|---|---|---|---|---|
| M1 | 128 | 128 | 3 | 64 | 0.75 | 0.38 |
| M2 | 128 | 128 | 3 | 128 | 0.75 | 0.31 |
| M3 | 128 | 128 | 3 | 256 | 0.79 | 0.30 |
| M4 | 128 | 256 | 3 | 64 | 0.77 | 0.31 |
| M5 | 128 | 256 | 3 | 128 | 0.77 | 0.32 |
| M6 | 128 | 256 | 3 | 256 | 0.75 | 0.29 |
| M7 | 256 | 128 | 3 | 64 | 0.76 | 0.34 |
| M8 | 256 | 128 | 3 | 128 | 0.75 | 0.33 |
| M9 | 256 | 128 | 3 | 256 | 0.75 | 0.38 |
| M10 | 256 | 256 | 3 | 64 | 0.74 | 0.36 |
| M11 | 256 | 256 | 3 | 128 | 0.75 | 0.35 |
| M12 | 256 | 256 | 3 | 256 | 0.75 | 0.28 |
| M13 | 128 | 128 | 5 | 64 | 0.87 | 0.21 |
| M14 | 128 | 128 | 5 | 128 | 0.88 | 0.20 |
| M15 | 128 | 128 | 5 | 256 | 0.89 | 0.19 |
| M16 | 128 | 256 | 5 | 64 | 0.90 | 0.27 |
| M17 | 128 | 256 | 5 | 128 | 0.89 | 0.17 |
| M18 | 128 | 256 | 5 | 256 | 0.92 | 0.16 |
| M19 | 256 | 128 | 5 | 64 | 0.89 | 0.23 |
| M20 | 256 | 128 | 5 | 128 | 0.89 | 0.17 |
| M21 | 256 | 128 | 5 | 256 | 0.91 | 0.19 |
| M22 | 256 | 256 | 5 | 64 | 0.89 | 0.21 |
| M23 | 256 | 256 | 5 | 128 | 0.90 | 0.18 |
| M24 | 256 | 256 | 5 | 256 | 0.91 | 0.17 |
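The 24 rows above form a full grid over the four network parameters, with kernel size varying slowest and GRU width fastest. Enumerating them in that order reproduces the table's M1–M24 labels (a bookkeeping sketch, not the paper's code):

```python
from itertools import product

# Enumerate the 24 candidate configurations in the table's order:
# kernel size outermost, then conv-layer filter counts, then GRU width innermost.
grid = [
    {"ksc": ksc, "nfc_1": nfc1, "nfc_2": nfc2, "nng": nng}
    for ksc, nfc1, nfc2, nng in product([3, 5], [128, 256], [128, 256], [64, 128, 256])
]

# M18 (index 17, 0-based) is the best-accuracy configuration in the table.
m18 = grid[17]
```

This makes it easy to verify that M18 corresponds to 128/256 filters, kernel size 5, and a 256-unit GRU, matching the final model parameters listed later.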
Tuning of hyperparameters. The first column gives the network-parameter configuration (M18, M21, M24) from the table above; the Model column labels the training-hyperparameter trials.
| Network Parameters | Model | Epoch | Batch Size | Accuracy |
|---|---|---|---|---|
| M18 | M1 | 80 | 128 | 0.9776 |
| | M2 | 90 | 128 | 0.9760 |
| | M3 | 100 | 128 | 0.9770 |
| | M4 | 80 | 256 | 0.9780 |
| | M5 | 90 | 256 | 0.9753 |
| | M6 | 100 | 256 | 0.9797 |
| M21 | M7 | 80 | 128 | 0.9530 |
| | M8 | 90 | 128 | 0.9532 |
| | M9 | 100 | 128 | 0.9541 |
| | M10 | 80 | 256 | 0.9523 |
| | M11 | 90 | 256 | 0.9537 |
| | M12 | 100 | 256 | 0.9548 |
| M24 | M13 | 80 | 128 | 0.9533 |
| | M14 | 90 | 128 | 0.9531 |
| | M15 | 100 | 128 | 0.9536 |
| | M16 | 80 | 256 | 0.9540 |
| | M17 | 90 | 256 | 0.9542 |
| | M18 | 100 | 256 | 0.9547 |
Figure 6. Depression diagnosis algorithm flow.
Sample distribution of the public and private datasets.
| Dataset | Depression | Normal | Samples |
|---|---|---|---|
| MODMA | 7200 | 8700 | 15,900 |
| Private dataset | 4119 | 3114 | 7533 |
Parameters in the diagnosis model of depression.
| Description | Value |
|---|---|
| Number of filters in convolutional layer 1 | 128 |
| Number of filters in convolutional layer 2 | 256 |
| Filter size in convolutional layers 1 and 2 | 5 |
| Pooling size in the max pooling layer | 2 |
| Number of neurons in GRU | 256 |
| Dropout | 0.2 |
| Epoch | 100 |
| Batch size | 256 |
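The temporal dimension flowing through the layer stack in the table can be traced with simple size arithmetic. This sketch assumes 'valid' (no-padding) convolutions, non-overlapping pooling, and a 1 s input window of 250 samples (the datasets' sampling rate); none of these are stated in this record:

```python
def conv1d_out(n, kernel, stride=1):
    """Output length of a 'valid' 1-D convolution over n timesteps."""
    return (n - kernel) // stride + 1

def pool_out(n, size):
    """Output length of non-overlapping max pooling."""
    return n // size

# Trace an assumed 250-sample window through the stack in the table:
# conv(k=5, 128 filters) -> conv(k=5, 256 filters) -> maxpool(2) -> GRU(256).
n = 250
n = conv1d_out(n, 5)   # 246 timesteps, 128 channels
n = conv1d_out(n, 5)   # 242 timesteps, 256 channels
n = pool_out(n, 2)     # 121 timesteps fed to the 256-unit GRU
```

With 'same' padding the sequence would instead shrink only at the pooling layer (250 → 125); either way the pooling step halves the GRU's workload.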
Figure 7. Iterative curve of the training process.
Figure 8. Confusion matrix of the test set.
Classification report.
| Label | Description | Precision | Recall | F1-Score | Support |
|---|---|---|---|---|---|
| 0 | Depression | 1.00 | 0.99 | 0.99 | 713 |
| 1 | Health | 0.99 | 1.00 | 0.99 | 877 |
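The F1 scores in the report follow directly from the precision and recall columns via the standard harmonic mean:

```python
def f1(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reproduce the report's F1 column from its precision/recall values.
scores = {
    "Depression": f1(1.00, 0.99),  # label 0
    "Health": f1(0.99, 1.00),      # label 1
}
```

Both classes round to 0.99, consistent with the 99.33% overall accuracy on the public test set.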
Parameters of comparison models.
| 1D-CNN | LSTM | GRU | 1D-CNN-LSTM | 1D-CNN-GRU |
|---|---|---|---|---|
| Conv1–5 × 128 | LSTM cell-256 | GRU cell-256 | Conv1–5 × 128 | Conv1–5 × 128 |
Figure 9. Results of comparative experiments on the public dataset.
Figure 10. Results of comparative experiments on the public dataset.
Figure 11. Results of comparative experiments on the private dataset.
Accuracy, loss, and training time of different models on the private dataset.
| Model | Accuracy (%) | Loss | Train Time (s) |
|---|---|---|---|
| CNN | 86.38 | 0.33 | 64.38 |
| LSTM | 88.22 | 0.30 | 253.62 |
| GRU | 88.65 | 0.30 | 227.43 |
| 1D-CNN-LSTM | 90.08 | 0.27 | 178.32 |
| 1D-CNN-GRU | 91.43 | 0.26 | 164.57 |
| 1D-CNN-GRU-ATTN | 97.98 | 0.07 | 160.48 |
Figure 12. Evaluation indicators of each model.
Comparison of classification results between the proposed method and previous works.
| Author | Year | Features | Methods | Accuracy (%) |
|---|---|---|---|---|
| Shuting, S. et al. [ | 2020 | Nonlinear + PLI | LR + ReliefF | 81.79 |
| Wang, Y. et al. [ | 2021 | ITD + statistical features | TCN | 85.23 |
| Wang, Y. et al. [ | 2021 | ITD + statistical features | L-TCN | 86.87 |
| This paper | 2022 | PSD of 5 bands | 1D-CNN-GRU-ATTN | 99.33 |