Shoucheng Shen, Jinling Fan.
Abstract
Theoretical research into the emotional attributes of ideological and political education can improve our ability to understand human emotion and to solve socio-emotional problems. To that end, this study analyzed emotion in ideological and political education by integrating a gated recurrent unit (GRU) with an attention mechanism. Building on the strong results BERT achieves in downstream networks, we use a multi-head attention mechanism, assisted by a bidirectional GRU (BiGRU), to extract the relevant information and the global information of emotion analysis in ideological and political education, respectively. The two kinds of information complement each other, and combining them in the neural network model further improves the accuracy of the extracted emotion information. Finally, the validity and domain adaptability of the model were verified on several publicly available, fine-grained emotion datasets.
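To make the described pipeline concrete, below is a minimal sketch of a BERT-BiGRU-MHA model in PyTorch with HuggingFace transformers. The class name, the head count (the parameter table below leaves it unspecified), and the exact μ-weighted fusion of the two feature streams are illustrative assumptions, not the authors' released implementation; the hidden sizes follow the parameter table.

```python
# Minimal sketch of a BERT + BiGRU + multi-head attention (MHA) tagger,
# assuming PyTorch and HuggingFace transformers. Names, the head count,
# and the mu-weighted fusion are assumptions, not the authors' code.
import torch
import torch.nn as nn
from transformers import AutoModel

class BertBiGruMha(nn.Module):
    def __init__(self, num_labels, hidden_size=384, num_heads=8, dropout=0.1):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")  # 768-dim tokens
        # BiGRU: 384 per direction -> 768 total, matching BERT's width.
        self.bigru = nn.GRU(768, hidden_size, batch_first=True, bidirectional=True)
        # Multi-head self-attention over the same BERT token features.
        self.mha = nn.MultiheadAttention(768, num_heads, dropout=dropout,
                                         batch_first=True)
        # Learnable fusion weight, initialized to mu = 0.5 as in the table below.
        self.mu = nn.Parameter(torch.tensor(0.5))
        self.classifier = nn.Linear(768, num_labels)

    def forward(self, input_ids, attention_mask):
        h = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        gru_out, _ = self.bigru(h)        # sequential (global) context per token
        attn_out, _ = self.mha(h, h, h)   # focused, relevance-weighted features
        fused = self.mu * gru_out + (1.0 - self.mu) * attn_out
        return self.classifier(fused)     # per-token aspect-sentiment logits
```

In the E2E-ABSA setting of Figure 4, the classifier would emit one tag per token (aspect boundary plus polarity), which is what the aspect counts in the dataset tables below refer to.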
Keywords: GRU; deep learning; emotion analysis; feature fusion; political and ideological education
Year: 2022 PMID: 35959025 PMCID: PMC9361775 DOI: 10.3389/fpsyg.2022.908154
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
FIGURE 1 | LSTM neural network structure.
FIGURE 2 | Gated recurrent unit (GRU).
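For reference alongside Figure 2, a standard GRU formulation (the textbook equations, not transcribed from the figure) is:

$$
\begin{aligned}
z_t &= \sigma(W_z x_t + U_z h_{t-1}),\\
r_t &= \sigma(W_r x_t + U_r h_{t-1}),\\
\tilde{h}_t &= \tanh\big(W_h x_t + U_h (r_t \odot h_{t-1})\big),\\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t,
\end{aligned}
$$

where $z_t$ is the update gate, $r_t$ the reset gate, and $\odot$ denotes elementwise multiplication. Relative to the LSTM of Figure 1, the GRU folds the input and forget gates into the single update gate and keeps no separate cell state, which reduces the parameter count.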
FIGURE 3 | Recurrent neural network based on the gated recurrent unit (GRU). Human image reproduced with permission from the TikTok dataset.
FIGURE 4 | Overall architecture of the E2E-ABSA model integrating GRU and multi-head attention (MHA).
FIGURE 5 | Semantic feature extraction based on the multi-head attention mechanism.
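To make the multi-head extraction of Figure 5 concrete, here is a toy use of PyTorch's built-in multi-head attention over features of width dim_a = 768; the batch size, sequence length, and 8-head split are arbitrary assumptions for illustration.

```python
# Toy multi-head attention over BERT-sized token features (dim_a = 768),
# assuming PyTorch; shapes and head count are illustrative only.
import torch
import torch.nn as nn

mha = nn.MultiheadAttention(embed_dim=768, num_heads=8, batch_first=True)
tokens = torch.randn(2, 40, 768)               # (batch, seq_len, dim_a)
features, weights = mha(tokens, tokens, tokens)
print(features.shape)  # torch.Size([2, 40, 768]): one fused vector per token
print(weights.shape)   # torch.Size([2, 40, 40]): attention, averaged over heads
```

Each head attends over the full sequence in a 768/8 = 96-dimensional subspace, so different heads can focus on different semantic cues before the outputs are concatenated and projected back to 768 dimensions.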
Parameter settings related to the model.
| Network dimension / parameter | Setting |
| Number of heads in the multi-head attention mechanism | |
| Initial value of the BiGRU feature weight parameter μ | μ = 0.5 |
| Initial learning rate of the model | learning_rate = 2e-5 |
| Dropout rate of the multi-head attention network | dropout = 0.1 |
| Unidirectional GRU network dimension | hidden_size = 384 |
| Multi-head attention network dimension | dim_a = 768 |
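Collected in code form, the table above corresponds to a configuration like the following hypothetical dict; the head count is left blank in the table, so the value here is only a placeholder.

```python
# Hyperparameters from the table above; num_heads is NOT specified in the
# paper, so the value below is a placeholder assumption.
config = {
    "num_heads": 8,          # placeholder: not given in the table
    "mu_init": 0.5,          # initial BiGRU feature-fusion weight mu
    "learning_rate": 2e-5,   # initial learning rate
    "dropout": 0.1,          # dropout rate of the multi-head attention network
    "hidden_size": 384,      # unidirectional GRU dimension (768 bidirectional)
    "dim_a": 768,            # multi-head attention network dimension
}
```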
Specific distributions of semantic aspects in the TikTok dataset.
| Dataset partition | Sentences | Positive aspects | Negative aspects | Neutral aspects |
| Training set | 1880 | 547 | 219 | 1799 |
| Validation set | 235 | 81 | 24 | 230 |
| Test set | 235 | 75 | 31 | 236 |
| Total | 2350 | 703 | 274 | 2265 |
FIGURE 6 | Changes in the BERT-BiGRU-MHA model on the TikTok dataset. (A) Micro-F1; (B) Train_Loss.
Comparison of the experimental results of the models on the TikTok dataset.
| Model | Micro-P | Micro-R | Micro-F1 | Macro-F1 |
| BERT-Linear | 0.5969 | 0.5774 | 0.5869 | 0.4491 |
| BERT-BiLSTM | 0.6024 | 0.5952 | 0.5988 | 0.3525 |
| BERT-CRF | 0.6152 | 0.6042 | 0.6096 | 0.4691 |
| BERT-MHA | 0.6399 | 0.5923 | 0.6151 | 0.4691 |
| BERT-BiGRU | 0.6228 | 0.6190 | 0.6208 | 0.4627 |
| BERT-BiGRU-MHA | 0.6449 | 0.6161 | 0.6301 | 0.4846 |
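The gap between Micro-F1 and Macro-F1 in this table reflects the class imbalance visible in the dataset table above (neutral aspects dominate): micro-averaging pools every prediction, while macro-averaging weights each polarity class equally. A minimal illustration with scikit-learn (toy labels, not the paper's evaluation script):

```python
# Micro vs. macro F1 on an imbalanced toy label set, assuming scikit-learn.
from sklearn.metrics import f1_score

y_true = ["neu"] * 8 + ["pos", "neg"]
y_pred = ["neu"] * 8 + ["neg", "pos"]  # both minority classes misclassified
print(f1_score(y_true, y_pred, average="micro"))  # 0.8  (dominated by "neu")
print(f1_score(y_true, y_pred, average="macro"))  # ~0.33 (classes weighted equally)
```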
Specific distributions of semantic aspects in the service dataset.
| Dataset partition | Sentences | Positive aspects | Negative aspects | Neutral aspects |
| Training set | 1343 | 936 | 620 | 100 |
| Validation set | 149 | 98 | 78 | 12 |
| Test set | 747 | 506 | 320 | 61 |
| Total | 2239 | 1540 | 1018 | 173 |
Comparison of the experimental results of the models on the service dataset.
| Model | Micro-P | Micro-R | Micro-F1 | Macro-F1 |
| BERT-Linear | 0.6428 | 0.6268 | 0.6347 | 0.4778 |
| BERT-BiLSTM | 0.6471 | 0.6589 | 0.6359 | 0.5138 |
| BERT-BiGRU | 0.6570 | 0.6392 | 0.6479 | 0.4878 |
| BERT-BiGRU-MHA | 0.6585 | 0.6347 | 0.6463 | 0.4902 |
Primary experimental results of each model on the Laptop14 and Rest14 datasets.
| Model | Laptop14 | | | Rest14 | | |
| | Micro-P | Micro-R | Micro-F1 | Micro-P | Micro-R | Micro-F1 |
| MNN (2021) | 0.5966 | 0.5733 | 0.5830 | 0.6678 | 0.6952 | 0.6793 |
| DOER (2021) | 0.6012 | 0.5814 | 0.6035 | 0.6752 | 0.7019 | 0.7278 |
| DREGCN-CNN-BERT (2022) | 0.6205 | 0.6071 | 0.6304 | 0.6933 | 0.7091 | 0.7260 |
| BERT-Linear | 0.6135 | 0.5883 | 0.6006 | 0.6997 | 0.7196 | 0.7095 |
| BERT-BiGRU | 0.6047 | 0.6104 | 0.6075 | 0.7140 | 0.7312 | 0.7225 |
| BERT-MHA | 0.6231 | 0.5868 | 0.6043 | 0.7397 | 0.7080 | 0.7235 |
| BERT-BiGRU-MHA | 0.6500 | 0.6151 | 0.6320 | 0.7543 | 0.7375 | 0.7458 |
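As a consistency check on this table, each Micro-F1 is the harmonic mean of the corresponding Micro-P and Micro-R; for example, for BERT-BiGRU-MHA on Rest14:

$$
\text{Micro-F1} = \frac{2PR}{P + R} = \frac{2 \times 0.7543 \times 0.7375}{0.7543 + 0.7375} \approx 0.7458,
$$

which matches the reported value (entries quoted from other papers may round differently).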
FIGURE 7 | Comparison of the performance indexes of each model.