| Literature DB >> 30410219 |
Kangning Yang, Shiyu Fu, Yue Gu, Shuhong Chen, Xinyu Li, Ivan Marsic.
Abstract
We examine the utility of linguistic content and vocal characteristics for multimodal deep learning in human spoken language understanding. We present a deep multimodal network with both feature attention and modality attention to classify utterance-level speech data. The proposed hybrid attention architecture helps the system focus on learning informative representations for both modality-specific feature extraction and model fusion. The experimental results show that our system achieves state-of-the-art or competitive results on three published multimodal datasets. We also demonstrate the effectiveness and generalization of our system on a medical speech dataset from an actual trauma scenario. Furthermore, we provide a detailed comparison and analysis of traditional approaches and deep learning methods for both feature extraction and fusion.
Entities:
Year: 2018 PMID: 30410219 PMCID: PMC6217979
Source DB: PubMed Journal: Proc Conf Assoc Comput Linguist Meet ISSN: 0736-587X
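The abstract describes feature attention applied during modality-specific feature extraction (the mechanism sketched for the textual branch in Figure 2). As a rough illustration only, here is a minimal NumPy sketch of one common form of step-level feature attention; the dot-product scoring, the 64-dimensional features, and the random stand-in parameters are assumptions for the example, not the paper's exact formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def feature_attention(features, w):
    """Collapse a (steps, dim) feature sequence into a single (dim,) utterance vector.

    Each step (a word for the textual branch, a frame for the acoustic branch)
    receives a scalar relevance score; the output is the score-weighted sum,
    so informative steps contribute more to the utterance representation.
    """
    scores = softmax(features @ w)      # (steps,) attention weights
    return scores @ features, scores    # pooled (dim,) vector and the weights

# Toy inputs: 20 word-level textual features and 50 frame-level acoustic features.
rng = np.random.default_rng(0)
text_feats = rng.standard_normal((20, 64))
audio_feats = rng.standard_normal((50, 64))
w_text = rng.standard_normal(64)    # stand-ins for learned scoring vectors
w_audio = rng.standard_normal(64)

text_vec, text_weights = feature_attention(text_feats, w_text)
audio_vec, audio_weights = feature_attention(audio_feats, w_audio)
print(text_vec.shape, audio_vec.shape)  # (64,) (64,)
```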
Figure 1: Overall system structure.
Figure 2: Textual feature extraction with attention.
Figure 4: Modality fusion.
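Figure 4 depicts the modality fusion stage. The sketch below, again using NumPy and illustrative dimensions, shows one way modality attention fusion can be realized: each modality-level vector receives a weight and is re-scaled before the fused representation is passed to the classifier. The scoring function and shapes are assumptions for the example, not the paper's exact design; the returned per-modality weights are the kind of scores visualized in Figure 5.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def modality_attention_fusion(modality_vectors, scoring_matrix):
    """Re-weight modality-level vectors before fusing them for classification.

    modality_vectors: list of (dim,) vectors, e.g. [textual, acoustic].
    scoring_matrix:   (num_modalities, dim) stand-in for learned scoring parameters.
    Returns the concatenated re-weighted vectors and the per-modality weights.
    """
    stacked = np.stack(modality_vectors)                          # (M, dim)
    weights = softmax(np.sum(stacked * scoring_matrix, axis=1))   # (M,) modality weights
    fused = (weights[:, None] * stacked).reshape(-1)              # (M * dim,) classifier input
    return fused, weights

rng = np.random.default_rng(1)
text_vec = rng.standard_normal(64)     # e.g. output of the textual feature extractor
audio_vec = rng.standard_normal(64)    # e.g. output of the acoustic feature extractor
fused, weights = modality_attention_fusion([text_vec, audio_vec], rng.standard_normal((2, 64)))
print(fused.shape, weights)            # (128,) and two weights summing to 1
```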
Dataset details.
| Dataset | Classes | Speakers (train/test) | Training Set | Testing Set |
|---|---|---|---|---|
| CMU-MOSI | 2 | 93 (74/19) | 1755 | 444 |
| IEMOCAP | 4 | 151 (121/30) | 4295 | 1103 |
| MOUD | 2 | 79 (59/20) | 322 | 115 |
| TRS | 7 | 50 (40/10) | 7261 | 1843 |
Detailed comparison on the CMU-MOSI (CM) and IEMOCAP (IE) datasets (accuracy percentage). OS* = openSMILE. COV* = COVAREP. ATFE = proposed attention-based textual feature extraction. AAFE = proposed attention-based acoustic feature extraction. MAF = modality attention fusion.
| Approach | CM | IE | Approach | CM | IE | Approach | CM | IE |
|---|---|---|---|---|---|---|---|---|
| BoW+SVM | 65.3 | 53.2 | BoW+SVM | 65.3 | 53.2 | OS*+SVM | 52.9 | 56.4 |
| OS*+SVM | 52.9 | 56.4 | WEV+SVM | 65.4 | 54.7 | COV*+SVM | 51.5 | 52.7 |
| BoW+OS*+SVM | 65.9 | 61.7 | CNNt+SVM | 67.3 | 55.2 | CNNa+SVM | 54.1 | 55.4 |
| CNNt+DF | 69.2 | 57.8 | LSTMt+SVM | 68.2 | 55.7 | LSTMa+SVM | 56.9 | 56.1 |
| CNNa+DF | 57.3 | 59.9 | ATFE+SVM | 72.2 | 61.0 | AAFE+SVM | 57.1 | 59.1 |
| CNNt+CNNa+DF | 71.6 | 64.2 | CNNt+DF | 69.2 | 57.8 | OS*+DF | 56.1 | 58.7 |
| ATFE+DF | 74.5 | 61.8 | LSTMt+DF | 71.2 | 58.2 | COV*+DF | 55.1 | 56.3 |
| AAFE+DF | 60.4 | 62.5 | LSTMa+DF | 58.5 | 60.5 | CNNa+DF | 57.3 | 59.9 |
| ATFE+AAFE+MAF | | | ATFE+DF | | | AAFE+DF | | |

| Approach | CM | IE | Approach | CM | IE |
|---|---|---|---|---|---|
| BoW+OS*+SVM | 65.9 | 61.7 | CNNt+CNNa+SVM | 65.7 | 63.4 |
| BoW+OS*+DF | 67.2 | 63.2 | CNNt+CNNa+DF | 71.6 | 64.2 |
| BoW+OS*+MAF | 68.7 | 64.7 | CNNt+CNNa+MAF | 72.9 | 66.1 |
| WEA+COV*+SVM | 65.8 | 62.7 | ATFE+AAFE+SVM | 71.1 | 65.1 |
| WEA+COV*+DF | 67.7 | 64.1 | ATFE+AAFE+DF | 74.8 | 70.5 |
| WEA+COV*+MAF | 68.5 | 64.8 | ATFE+AAFE+MAF | | |

Results on the TRS dataset (accuracy percentage).
| Approach | TRS |
|---|---|
| AAFE+DF | 56.5 |
| ATFE(trans)+DF | 66.8 |
| ATFE(asr)+DF | 47.7 |
| ATFE(trans)+AAFE+DF | 69.4 |
| ATFE(asr)+AAFE+DF | 58.9 |
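The OS*+SVM rows above pair utterance-level openSMILE functionals with a support vector machine. A minimal sketch of that style of baseline is shown below, assuming the `opensmile` Python package and scikit-learn; the file paths, labels, and the ComParE_2016 feature set are placeholders and may differ from the configuration used in the paper.

```python
import opensmile
from sklearn.svm import SVC

# Utterance-level acoustic functionals: one fixed-length vector per audio file.
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.ComParE_2016,
    feature_level=opensmile.FeatureLevel.Functionals,
)

train_files = ["utt_001.wav", "utt_002.wav"]   # placeholder audio paths
train_labels = [0, 1]                          # placeholder class labels

X_train = [smile.process_file(f).to_numpy().ravel() for f in train_files]
clf = SVC(kernel="rbf").fit(X_train, train_labels)

x_test = smile.process_file("utt_003.wav").to_numpy().ravel()
print(clf.predict([x_test]))
```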
Figure 5: The weighted scores of modality attention. (a) Modality attention scores of different categories on IEMOCAP. (b) Modality attention scores of different datasets.
Proposed system vs previous methods. Acc = accuracy (%). W-F1 = weighted F1 score.
| Approach | CMU-MOSI Acc | CMU-MOSI W-F1 | IEMOCAP Acc | IEMOCAP W-F1 | MOUD Acc | MOUD W-F1 | TRS Acc | TRS W-F1 |
|---|---|---|---|---|---|---|---|---|
| SVM Tree | 67.3 | 66.1 | 66.4 | 66.7 | 60.4 | 50.4 | 58.4 | 45.7 |
| BL-SVM | 68.4 | 67.8 | 65.2 | 65.0 | 60.3 | 52.8 | 59.2 | 50.1 |
| GSV-eVector | 65.7 | 65.5 | 64.2 | 64.3 | 61.1 | 52.3 | 58.4 | 48.4 |
| C-MKL | 71.3 | 71.0 | 67.0 | 67.2 | 72.0 | 72.2 | 62.1 | 58.1 |
| TFN | 73.6 | 73.5 | 70.4 | 70.2 | 62.1 | 61.2 | 64.4 | 61.5 |
| WF-LSTM | 73.9 | 73.3 | 69.5 | 69.4 | 72.7 | 72.8 | 65.6 | 61.5 |
| BC-LSTM | 72.4 | 72.6 | 70.8 | 70.8 | 72.4 | 72.4 | 67.9 | 64.4 |
| H-DMS | 70.4 | 70.2 | 70.2 | 69.8 | 68.4 | 67.6 | 66.7 | 64.3 |
| Our Method | | | | | | | | |