
Reevaluating pragmatic reasoning in language games.

Les Sikos, Noortje J. Venhuizen, Heiner Drenhaus, Matthew W. Crocker

Abstract

The results of a highly influential study that tested the predictions of the Rational Speech Act (RSA) model suggest (a) that listeners use pragmatic reasoning in one-shot web-based referential communication games despite the artificial, highly constrained, and minimally interactive nature of the task, and (b) that RSA accurately captures this behavior. In this work, we reevaluate the contribution of the pragmatic reasoning formalized by RSA in explaining listener behavior by comparing RSA to a baseline literal listener model driven only by literal word meaning and the prior probability of referring to an object. Across three experiments we observe only modest evidence of pragmatic behavior in one-shot web-based language games, and only under very limited circumstances. We find that although RSA provides a strong fit to listener responses, it does not perform better than the baseline literal listener model. Our results suggest that while participants playing the role of the Speaker are informative in these one-shot web-based reference games, participants playing the role of the Listener only rarely take this Speaker behavior into account when reasoning about the intended referent. In addition, we show that RSA's fit is primarily due to a combination of non-pragmatic factors, perhaps the most surprising of which is that in the majority of conditions amenable to pragmatic reasoning, RSA (accurately) predicts that listeners will behave non-pragmatically. This leads us to conclude that RSA's strong overall correlation with human behavior in one-shot web-based language games does not reflect listeners' pragmatic reasoning about informative speakers.

Year: 2021    PMID: 33730097    PMCID: PMC7968720    DOI: 10.1371/journal.pone.0248388

Source DB: PubMed    Journal: PLoS One    ISSN: 1932-6203    Impact factor: 3.240


References: 11 in total

1.  A Monte Carlo evaluation of tests for comparing dependent correlations.

Authors:  James B Hittner; Kim May; N Clayton Silver
Journal:  J Gen Psychol       Date:  2003-04

2.  Predicting pragmatic reasoning in language games.

Authors:  Michael C Frank; Noah D Goodman
Journal:  Science       Date:  2012-05-25       Impact factor: 47.728

3.  Conceptual pacts and lexical choice in conversation.

Authors:  S E Brennan; H H Clark
Journal:  J Exp Psychol Learn Mem Cogn       Date:  1996-11       Impact factor: 3.051

4.  Resolving uncertainty in plural predication.

Authors:  Gregory Scontras; Noah D Goodman
Journal:  Cognition       Date:  2017-07-27

5.  Knowledge and implicature: modeling language understanding as social cognition.

Authors:  Noah D Goodman; Andreas Stuhlmüller
Journal:  Top Cogn Sci       Date:  2013-01

6.  Reference Production as Search: The Impact of Domain Size on the Production of Distinguishing Descriptions.

Authors:  Albert Gatt; Emiel Krahmer; Kees van Deemter; Roger P G van Gompel
Journal:  Cogn Sci       Date:  2016-06-06

7.  When redundancy is useful: A Bayesian approach to "overinformative" referring expressions.

Authors:  Judith Degen; Robert D Hawkins; Caroline Graf; Elisa Kreiss; Noah D Goodman
Journal:  Psychol Rev       Date:  2020-04-02       Impact factor: 8.934

8.  Visual Complexity and Its Effects on Referring Expression Generation.

Authors:  Micha Elsner; Alasdair Clarke; Hannah Rohde
Journal:  Cogn Sci       Date:  2017-06-26

9.  cocor: a comprehensive solution for the statistical comparison of correlations.

Authors:  Birk Diedenhofen; Jochen Musch
Journal:  PLoS One       Date:  2015-04-02       Impact factor: 3.240

10.  Reasoning in Reference Games: Individual- vs. Population-Level Probabilistic Modeling.

Authors:  Michael Franke; Judith Degen
Journal:  PLoS One       Date:  2016-05-05       Impact factor: 3.240

