Vaibhav Bhat, Anita Yadav, Sonal Yadav, Dhivya Chandrasekaran, Vijay Mago.
Abstract
Emotion recognition in conversations is an important step in various virtual chatbots that require opinion-based feedback, as in social media threads, online support, and many other applications. Current emotion recognition in conversations (ERC) models face issues such as: (a) loss of contextual information between two dialogues of a conversation, (b) failure to give appropriate importance to significant tokens in each utterance, and (c) inability to pass on emotional information from previous utterances. The proposed model, Advanced Contextual Feature Extraction (AdCOFE), addresses these issues by performing unique feature extraction using knowledge graphs, sentiment lexicons, and phrases of natural language at all levels (word and position embedding) of the utterances. Experiments on ERC datasets show that AdCOFE is beneficial in capturing emotions in conversations.
Keywords: Chatbots; Dyadic conversation; Emotion recognition
Year: 2021 PMID: 34977351 PMCID: PMC8670389 DOI: 10.7717/peerj-cs.786
Source DB: PubMed Journal: PeerJ Comput Sci ISSN: 2376-5992
Distribution of labels in the IEMOCAP dataset.
| Emotion | Count |
|---|---|
| Happy | 376 |
| Sad | 764 |
| Neutral | 1,080 |
| Angry | 749 |
| Excited | 520 |
| Frustrated | 1,210 |
Distribution of labels in the MELD dataset.
| Emotion | Count |
|---|---|
| Anger | 1,500 |
| Disgust | 364 |
| Fear | 338 |
| Joy | 2,312 |
| Neutral | 5,960 |
| Sadness | 876 |
| Surprise | 1,490 |
Figure 1. Proposed model architecture.
Figure 2. Example of the ConceptNet graph.
Figure 3. Example of ConceptNet contextual enrichment.
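Figures 2 and 3 show how ConceptNet relations are used to enrich an utterance with related concepts. A minimal, stdlib-only sketch of this style of enrichment is given below; the mini-graph, its edges, and the `enrich` helper are illustrative assumptions for this sketch, not the paper's actual ConceptNet queries.

```python
# Toy ConceptNet-style graph: token -> related concepts.
# These edges are invented for illustration; AdCOFE queries ConceptNet itself.
MINI_GRAPH = {
    "exam": ["test", "stress", "school"],
    "party": ["celebration", "joy", "friends"],
}

def enrich(tokens, graph, max_neighbors=2):
    """Append up to max_neighbors related concepts after each known token."""
    enriched = []
    for tok in tokens:
        enriched.append(tok)
        enriched.extend(graph.get(tok, [])[:max_neighbors])
    return enriched

print(enrich(["the", "party", "tonight"], MINI_GRAPH))
# -> ['the', 'party', 'celebration', 'joy', 'tonight']
```

Enriching the raw token sequence before embedding is what lets the encoder see contextual concepts (e.g., "joy" near "party") that the surface text alone does not carry.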
Pseudo-code for proposed model.
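Based on the incremental steps reported in the ablation table (ALBERT, batch sentences, ConceptNet, VADER), a hedged sketch of the feature-extraction flow might look as follows. The function names, staging, and stand-in components are assumptions for illustration, not the authors' implementation.

```python
def adcofe_features(tokens, enrich, embed, sentiment_score):
    """Hedged sketch of AdCOFE-style feature extraction:
    1. enrich utterance tokens with knowledge-graph concepts (ConceptNet),
    2. embed the enriched utterance (ALBERT in the paper),
    3. append a sentiment-lexicon score (VADER in the paper).
    """
    enriched = enrich(tokens)
    vec = embed(enriched)
    return vec + [sentiment_score(enriched)]

# Stand-in components for demonstration only (not the paper's models):
toy_enrich = lambda toks: toks + ["related_concept"]  # placeholder ConceptNet step
toy_embed = lambda toks: [len(toks) / 10.0]           # placeholder ALBERT embedding
toy_sent = lambda toks: 0.5                           # placeholder VADER score

print(adcofe_features(["i", "am", "happy"], toy_enrich, toy_embed, toy_sent))
# -> [0.4, 0.5]
```

The point of the sketch is the ordering: enrichment happens before embedding, so the encoder attends over the expanded token sequence, while the lexicon score is concatenated afterwards as an extra feature.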
Comparison of the performance of the proposed model with baseline models on the IEMOCAP dataset in terms of accuracy (%) and F1-score (%).
| Methods | Happy Acc | Happy F1 | Sad Acc | Sad F1 | Neutral Acc | Neutral F1 | Angry Acc | Angry F1 | Excited Acc | Excited F1 | Frustrated Acc | Frustrated F1 | Avg Acc | Avg F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CNN | 27.77 | 29.86 | 57.14 | 53.83 | 34.33 | 40.14 | 61.17 | 52.44 | 46.15 | 50.09 | 62.99 | 55.75 | 48.92 | 48.18 |
| Memnet | 25.72 | 33.53 | 55.53 | 61.77 | 58.12 | 52.84 | 59.32 | 55.39 | 51.50 | 58.30 | 62.70 | 59.00 | 55.72 | 55.10 |
| CMN | 25.00 | 30.38 | 55.92 | 62.41 | 52.86 | 52.39 | 61.76 | 59.83 | 55.52 | 60.25 | 71.13 | 60.69 | 56.56 | 56.13 |
| DialogueRNN | 28.47 | 36.61 | 65.31 | 72.40 | 62.50 | 57.21 | 67.65 | 65.71 | 70.90 | 68.61 | 61.68 | 60.80 | 61.80 | 61.51 |
| BiDialogueRNN | 25.69 | 33.18 | 75.10 | 78.80 | 58.59 | 59.21 | 64.71 | 65.28 | 80.27 | 71.86 | 61.15 | 58.91 | 63.40 | 62.75 |
| AdCOFE | 54.94 | 54.84 | 56.69 | 56.64 | 61.73 | 59.68 | 72.71 | 73.04 | 64.11 | 65.00 | 69.67 | 67.12 | 64.51 | 64.72 |
Comparison of F1-score with latest ERC models.
| Model | F1 score |
|---|---|
| EmoGraph | 65.4 |
| AGHMN | 63.5 |
| RGAT | 65.22 |
| AdCOFE | 64.7 |
Accuracy and F1-score of the model after every step.
| Model | Accuracy | F1 score |
|---|---|---|
| ALBERT | 58.2 | 58.6 |
| ALBERT + Batch Sentences | 58.4 | 58.7 |
| ALBERT + ConceptNet | 62.3 | 62.7 |
| ALBERT + ConceptNet + VADER | 64.1 | 64.0 |
Comparison of accuracy and weighted F1-score.
| Model | Accuracy (%) | Weighted F1 score |
|---|---|---|
| DialogueRNN | 59.540 | 0.5703 |
| bc-LSTM+Att | 57.5 | 0.5644 |
| KET | – | 0.5818 |
| AdCOFE | 60.2 | 0.5871 |
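The weighted F1-scores in this table average per-class F1 with each class's support (number of examples) as the weight. A small stdlib example with hypothetical counts:

```python
def weighted_f1(per_class):
    """Weighted F1: per-class F1 averaged with class support as weights.
    per_class is a list of (support, f1) pairs."""
    total = sum(n for n, _ in per_class)
    return sum(n * f1 for n, f1 in per_class) / total

# Hypothetical two-class example: supports 80 and 20
print(round(weighted_f1([(80, 0.60), (20, 0.50)]), 4))
# -> 0.58
```

Weighting by support matters for imbalanced ERC datasets such as MELD, where the neutral class dominates and would otherwise be under-represented by a plain macro average.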