| Literature DB >> 35846596 |
Abstract
This study focuses on an emotion analysis method for psychoanalysis applications based on sentiment recognition. The method is applied to the sentiment recognition module in the server, and the sentiment recognition function is effectively realized through an improved convolutional neural network and bidirectional long short-term memory (C-BiL) model. First, the implementation difficulties of the C-BiL model and the specific sentiment classification design are described. Then, the design process of the C-BiL model is introduced, and its innovations are indicated. Finally, the experimental results of the models are compared and analyzed. Among the deep learning models, the accuracy of the C-BiL model designed in this study is relatively high for binary, three-class, and five-class classification alike, with an average improvement of 2.47% on the Diary data set, 2.16% on the Weibo data set, and 2.08% on the Fudan data set. Therefore, the C-BiL model designed in this study can not only successfully classify texts but also effectively improve the accuracy of text sentiment recognition.
Keywords: bidirectional long short-term memory; convolutional neural networks; emotion analysis; sentiment classification; text sentiment recognition
Year: 2022 PMID: 35846596 PMCID: PMC9280270 DOI: 10.3389/fpsyg.2022.852242
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
FIGURE 1: Realization process of the emotion recognition method.
FIGURE 2: Structure of the C-BiL model.
Data set information.
| Dataset | Classes | Train/Dev/Test | Length |
| Weibo | 3 | 80000/10000/10000 | 568 |
| Fudan | 20 | 8823/981/9832 | 2981 |
| Diary | 5 | 1900/200/200 | 540 |
Parameter settings for the C-BiL model.
| Adjustable parameter | Selected value |
| Word vector dimension | 300 |
| WA, bA | Randomly initialized from a uniform distribution over (−0.1, 0.1) |
| WB, bB | Randomly initialized from a uniform distribution over (−0.1, 0.1) |
| CNN(A) convolution kernel function | ReLU |
| CNN(B) convolution kernel function | ReLU |
| CNN(A) filter sliding window size | 300*2 |
| CNN(B) filter sliding window size | 600*2 |
| CNN(A) number of filters | 300 |
| CNN(B) number of filters | 300 |
| Random update parameter ratio | 0.5 |
| Number of model training iterations | 50 |
| Number of LSTM(A)'s layers | 1 |
| Number of LSTM(B)'s layers | 1 |
| Number of LSTM(A)'s hidden units | 300 |
| Number of LSTM(B)'s hidden units | 300 |
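The parameter table fixes the tensor dimensions at each stage of a CNN-followed-by-BiLSTM pipeline: 300-d word vectors, 300 convolution filters spanning a window of 2 word vectors with ReLU activation, and a bidirectional LSTM with 300 hidden units per direction. The following NumPy sketch illustrates the resulting shape flow; it is not the authors' code, and the sentence length (10 tokens) and all weights are arbitrary placeholders chosen only to demonstrate the dimensions.

```python
import numpy as np

np.random.seed(0)

# Hypothetical sentence of 10 tokens, each a 300-d word vector
# (dimension taken from the parameter table above).
seq_len, emb_dim = 10, 300
x = np.random.randn(seq_len, emb_dim)

# CNN branch: 300 filters, sliding window of size 300*2
# (each filter spans 2 adjacent word vectors), ReLU activation.
n_filters, window = 300, 2
W = np.random.randn(n_filters, window * emb_dim) * 0.01
b = np.zeros(n_filters)

# Valid convolution over time: one feature vector per window position.
windows = np.stack([x[i:i + window].ravel()
                    for i in range(seq_len - window + 1)])
conv_out = np.maximum(windows @ W.T + b, 0.0)  # ReLU
assert conv_out.shape == (seq_len - window + 1, n_filters)

def lstm_pass(seq, hidden):
    """Single-layer LSTM forward pass with random weights (shape illustration)."""
    d = seq.shape[1]
    Wg = np.random.randn(4 * hidden, d + hidden) * 0.01  # i, f, o, g gates stacked
    bg = np.zeros(4 * hidden)
    h, c = np.zeros(hidden), np.zeros(hidden)
    outs = []
    for t in range(seq.shape[0]):
        z = Wg @ np.concatenate([seq[t], h]) + bg
        gates = 1.0 / (1.0 + np.exp(-z[:3 * hidden]))      # sigmoid gates
        i, f, o = gates[:hidden], gates[hidden:2 * hidden], gates[2 * hidden:]
        g = np.tanh(z[3 * hidden:])                        # candidate cell state
        c = f * c + i * g
        h = o * np.tanh(c)
        outs.append(h.copy())
    return np.stack(outs)

# Bidirectional pass: forward and reversed sequences, 300 hidden units each,
# concatenated into a 600-d representation per time step.
hidden = 300
fwd = lstm_pass(conv_out, hidden)
bwd = lstm_pass(conv_out[::-1], hidden)[::-1]
bi_out = np.concatenate([fwd, bwd], axis=1)
assert bi_out.shape == (seq_len - window + 1, 2 * hidden)
```

The 600-d per-step output matches the 600*2 window size listed for CNN(B), consistent with the second convolution operating on the BiLSTM's concatenated hidden states.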
Accuracy under binary classification.
| Model comparison | Fudan | Diary | Weibo |
| SFCNN (Benchmark model_1) | 93.79% | 93.04% | 94.16% |
| Bi-LSTM (Benchmark model_2) | 94.83% | 94.60% | 94.97% |
| CNN-BiLSTM (Benchmark model_3) | 94.96% | 94.74% | 95.04% |
| BiLSTM-CNN (Benchmark model_4) | 94.89% | 94.83% | 95.19% |
| C-BiL (The proposed model) | 95.66% | 95.77% | 96.12% |
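As a reading aid for the binary-classification table above (not a computation from the paper), the margin of C-BiL over the strongest benchmark in each dataset column can be derived directly from the reported accuracies:

```python
# Binary-classification accuracies from the table above (percent),
# listed column by column in the table's dataset order.
benchmarks = {
    "SFCNN":      [93.79, 93.04, 94.16],
    "Bi-LSTM":    [94.83, 94.60, 94.97],
    "CNN-BiLSTM": [94.96, 94.74, 95.04],
    "BiLSTM-CNN": [94.89, 94.83, 95.19],
}
c_bil = [95.66, 95.77, 96.12]

# Strongest benchmark per dataset column, and C-BiL's margin over it.
best = [max(col) for col in zip(*benchmarks.values())]
margins = [round(c - b, 2) for c, b in zip(c_bil, best)]
print(margins)  # [0.7, 0.94, 0.93]
```

C-BiL thus outperforms the best competing model on every dataset in the binary setting, by roughly 0.7 to 1 percentage point.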
Accuracy under three-class classification.
| Model comparison | Fudan | Diary | Weibo |
| SFCNN (Benchmark model_1) | 82.34% | 81.93% | 83.51% |
| Bi-LSTM (Benchmark model_2) | 82.56% | 82.07% | 83.73% |
| CNN-BiLSTM (Benchmark model_3) | 82.16% | 81.57% | 83.98% |
| BiLSTM-CNN (Benchmark model_4) | 83.63% | 82.30% | 84.07% |
| C-BiL (The proposed model) | 83.94% | 83.64% | 84.89% |
Accuracy under five-class classification.
| Model comparison | Fudan | Diary |
| SFCNN (Benchmark model_1) | 54.90% | 56.07% |
| Bi-LSTM (Benchmark model_2) | 54.72% | 55.93% |
| CNN-BiLSTM (Benchmark model_3) | 54.96% | 56.29% |
| BiLSTM-CNN (Benchmark model_4) | 55.17% | 56.34% |
| C-BiL (The proposed model) | 56.45% | 57.56% |