
Are GRU Cells More Specific and LSTM Cells More Sensitive in Motive Classification of Text?

Nicole Gruber; Alfred Jockisch

Abstract

In the Thematic Apperception Test, a picture story exercise (TAT/PSE; Heckhausen, 1963), it is assumed that unconscious motives can be detected in the text someone tells about the pictures shown in the test. This text is therefore classified by trained experts according to evaluation rules. We tried to automate this coding and used a recurrent neural network (RNN) because of the sequential input data. Two different cell types exist to improve recurrent neural networks with regard to long-term dependencies in sequential input data: long short-term memory (LSTM) cells and gated recurrent units (GRUs). Some results indicate that GRUs can outperform LSTMs; others show the opposite. So the question remains when to use GRU or LSTM cells. Our results show (N = 18,000 data points, 10-fold cross-validated) that GRUs outperform LSTMs (accuracy = .85 vs. .82) for overall motive coding. Further analysis showed that GRUs have higher specificity (true negative rate) and learn less prevalent content better, whereas LSTMs have higher sensitivity (true positive rate) and learn highly prevalent content better. A closer look at a picture × category matrix reveals that LSTMs outperform GRUs only where deep context understanding is important. As neither technique presents a clear advantage over the other in the domain investigated here, an interesting topic for future work is to develop a method that combines their strengths.
Copyright © 2020 Gruber and Jockisch.
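The GRU-vs-LSTM contrast in the abstract comes down to their gating equations: a GRU has two gates and a single hidden state, while an LSTM has three gates plus a separate cell state. As a minimal sketch of those standard single-step updates (not the authors' implementation; scalar weights are used for readability, and the function names and parameter dictionary `p` are our own):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, p):
    """One GRU step: two gates (update z, reset r), one hidden state h."""
    z = sigmoid(p["wz"] * x + p["uz"] * h + p["bz"])                # update gate
    r = sigmoid(p["wr"] * x + p["ur"] * h + p["br"])                # reset gate
    h_cand = math.tanh(p["wh"] * x + p["uh"] * (r * h) + p["bh"])   # candidate state
    return (1.0 - z) * h + z * h_cand                               # blend old and new state

def lstm_step(x, h, c, p):
    """One LSTM step: three gates (input i, forget f, output o) plus cell state c."""
    i = sigmoid(p["wi"] * x + p["ui"] * h + p["bi"])                # input gate
    f = sigmoid(p["wf"] * x + p["uf"] * h + p["bf"])                # forget gate
    o = sigmoid(p["wo"] * x + p["uo"] * h + p["bo"])                # output gate
    c_new = f * c + i * math.tanh(p["wc"] * x + p["uc"] * h + p["bc"])
    return o * math.tanh(c_new), c_new                              # new hidden and cell state
```

The sketch makes the parameter difference visible: the GRU needs 9 weights here versus the LSTM's 12, since it drops the output gate and the separate cell state, which is one common explanation for the differing strengths reported above.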


Keywords:  GRU; LSTM; RNN; implicit motive; text classification; thematic apperception test

Year:  2020        PMID: 33733157      PMCID: PMC7861254          DOI: 10.3389/frai.2020.00040

Source DB:  PubMed          Journal:  Front Artif Intell        ISSN: 2624-8212


References:  4 in total

1.  An examination of interrater reliability for scoring the Rorschach Comprehensive System in eight data sets.

Authors:  Gregory J Meyer; Mark J Hilsenroth; Dirk Baxter; John E Exner; J Christopher Fowler; Craig C Piers; Justin Resnick
Journal:  J Pers Assess       Date:  2002-04

Review 2.  Intraclass correlations: uses in assessing rater reliability.

Authors:  P E Shrout; J L Fleiss
Journal:  Psychol Bull       Date:  1979-03       Impact factor: 17.737

3.  Long short-term memory.

Authors:  S Hochreiter; J Schmidhuber
Journal:  Neural Comput       Date:  1997-11-15       Impact factor: 2.026

4.  Are implicit motives revealed in mere words? Testing the marker-word hypothesis with computer-based text analysis.

Authors:  Oliver C Schultheiss
Journal:  Front Psychol       Date:  2013-10-16
Cited by:  4 in total

1.  Heterogeneous Ensemble Deep Learning Model for Enhanced Arabic Sentiment Analysis.

Authors:  Hager Saleh; Sherif Mostafa; Abdullah Alharbi; Shaker El-Sappagh; Tamim Alkhalifah
Journal:  Sensors (Basel)       Date:  2022-05-12       Impact factor: 3.847

Review 2.  Deep Learning: A Comprehensive Overview on Techniques, Taxonomy, Applications and Research Directions.

Authors:  Iqbal H Sarker
Journal:  SN Comput Sci       Date:  2021-08-18

3.  The Real-Time and Patient-Specific Prediction for Duration and Recovery Profile of Cisatracurium Based on Deep Learning Models.

Authors:  Kan Wang; Binyu Gao; Heqi Liu; Hui Chen; Honglei Liu
Journal:  Front Pharmacol       Date:  2022-02-04       Impact factor: 5.810

Review 4.  A survey of uncover misleading and cyberbullying on social media for public health.

Authors:  Omar Darwish; Yahya Tashtoush; Amjad Bashayreh; Alaa Alomar; Shahed Alkhaza'leh; Dirar Darweesh
Journal:  Cluster Comput       Date:  2022-08-23       Impact factor: 2.303


Beijing Coyote Bioscience Co., Ltd. (北京卡尤迪生物科技股份有限公司) © 2022-2023.