
Good judgments do not require complex cognition.

Julian N Marewski, Wolfgang Gaissmaier, Gerd Gigerenzer

Abstract

What cognitive capabilities allow Homo sapiens to successfully bet on the stock market, to catch balls in baseball games, to accurately predict the outcomes of political elections, or to correctly decide whether a patient needs to be allocated to the coronary care unit? It is a widespread belief in psychology and beyond that complex judgment tasks require complex solutions. Countering this common intuition, in this article, we argue that in an uncertain world actually the opposite is true: Humans do not need complex cognitive strategies to make good inferences, estimations, and other judgments; rather, it is the very simplicity and robustness of our cognitive repertoire that makes Homo sapiens a capable decision maker.

Year:  2009        PMID: 19784854      PMCID: PMC2860098          DOI: 10.1007/s10339-009-0337-0

Source DB:  PubMed          Journal:  Cogn Process        ISSN: 1612-4782


Explanation … demands a theory … that predicts effects of the manipulated variables on performance of each task. Crude distinctions between “systems” are seldom sufficient for this purpose. Further, once a sufficiently elaborate process model is in hand, it is not clear that the notion of a system is any longer of much use. Once the model has been spelled out, it makes little difference whether its components are called systems, modules, processes, or something else; the explanatory burden is carried by the nature of the proposed mechanisms and their interactions, not by what they are called. (Hintzman 1990, p. 121)

Evans and Over (2009) provided comments on our article (Marewski et al. 2009). Here, we respond to their major points.

Models of heuristics specify the proportion of correct and false judgments

In their critique of our article, Evans and Over (2009) first argue that “heuristics can often lead to biases as well as effective responding” (p. 2) and that we write “as if heuristics were invariably rational and error free” (p. 5). This is a most surprising conjecture. Tversky and Kahneman (1974) correctly argued that heuristics are in general quite effective but sometimes lead to severe errors. But since they had no computational models of availability, representativeness, and anchoring, they could not spell out the “sometimes.” The fast and frugal heuristics framework has spelled out the “sometimes” by developing computational models of heuristics that allow for quantitative predictions about how many errors heuristics make, or how their performance compares to that of more complex models. Here are three examples.

First, Goldstein and Gigerenzer (2002) showed that when the recognition validity is 0.80 and a person recognizes half of the objects without having any further knowledge, then by relying on the recognition heuristic, this person would get it right in 65% of the cases. This means this person would get it wrong in 35% of the cases. This is an analytical result about how many errors the use of a certain heuristic implies, given a certain knowledge state of the person.

Second, across 20 different studies on predicting psychological, demographic, economic, and other criteria, take-the-best, tallying, multiple regression, and minimalist made correct predictions, on average, in 71, 69, 68, and 65% of the cases (Czerlinski et al. 1999, p. 105). This means that, on average, the strategies made errors 29, 31, 32, and 35% of the time, respectively. This is a simulation result that specifies the proportion of errors that the heuristics and the more complex multiple regression strategy make in prediction.

Third, consider the question faced by managers of how to tell whether a customer is still active or has become inactive in a large customer database. Wübben and Wangenheim (2008) reported that managers rely on one-reason decision making—specifically, the hiatus heuristic: if customers have not purchased anything for 9 (in one case, 6) months, conclude that they are inactive, otherwise active. In three different companies, this heuristic correctly classified 83, 77, and 77% of the customers overall, compared to a Pareto/NBD (negative binomial distribution) model, a standard optimization technique in this field, which classified 75, 77, and 74% of the customers correctly. This means the heuristic got it wrong in 17, 23, and 23% of the cases, while the optimization model got it wrong slightly more often. This was an empirical study that compared actual experts’ heuristics with an optimization model.

These examples illustrate that heuristics are not error free, and that formal models allow us to quantify their errors and compare them to the errors other models make. It is therefore hard to understand how Evans and Over (2009) can read our writing as claiming that heuristics are error free. By the way, the second and third results also illustrate that, in the real world, it is not so infrequent that one-reason decision-making heuristics are faster, more frugal, and more accurate at the same time. This leads to a second misunderstanding.
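The first and third examples above can be written out in a few lines of code. The accuracy formula follows Goldstein and Gigerenzer’s (2002) analysis of the recognition heuristic, and the nine-month threshold is the one Wübben and Wangenheim (2008) report; function and variable names are our own illustrative choices.

```python
# Expected accuracy of the recognition heuristic (Goldstein & Gigerenzer, 2002):
# across all pairs drawn from N objects of which n are recognized, the heuristic
# applies only when exactly one object is recognized (recognition validity alpha);
# with no further knowledge, the remaining cases are decided by guessing (0.5).
def recognition_accuracy(N, n, alpha, beta=0.5):
    pairs = N * (N - 1)                        # ordered pairs of objects
    p_neither = (N - n) * (N - n - 1) / pairs  # neither recognized -> guess
    p_one = 2 * n * (N - n) / pairs            # exactly one recognized -> alpha
    p_both = n * (n - 1) / pairs               # both recognized -> knowledge (beta)
    return 0.5 * p_neither + alpha * p_one + beta * p_both

# With validity 0.80 and half of many objects recognized, accuracy approaches 65%.
print(round(recognition_accuracy(1000, 500, 0.80), 2))  # -> 0.65

# The hiatus heuristic: one-reason classification of customers.
def hiatus_heuristic(months_since_last_purchase, hiatus=9):
    return "inactive" if months_since_last_purchase > hiatus else "active"
```

Note that the 65% figure falls out of the formula directly, which is what makes analytical results of this kind possible in the first place.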

Heuristics do not always imply effort-accuracy trade-offs

According to Evans and Over (2009), heuristics are “short-cut methods of solving problems that pay a cost in accuracy for what they gain in speed” (p. 5). With this they have repeated the standard account of heuristics, which we and others have shown to be incorrect. As the examples above illustrate and as we pointed out throughout our paper (Marewski et al. 2009), heuristics do not always imply effort-accuracy trade-offs. Computer simulations and experiments have shown that fast and frugal decision-making strategies can often lead to more accurate inferences than strategies that use more information and computation. The analysis of the situations in which this occurs is part of the study of ecological rationality (Gigerenzer and Brighton 2009). An organism (or system or organization, etc.) that is faced with uncertainty sometimes needs to ignore information to make good decisions, and therefore simplicity can pay in ways beyond allowing for faster decisions.
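To make concrete how little computation such a strategy requires, here is a minimal sketch of take-the-best as described above. The cue representation (binary cue values in dictionaries) and the example cues are hypothetical, chosen only for illustration.

```python
# Minimal sketch of take-the-best: search cues in order of validity, stop at
# the first cue that discriminates between the two options, and decide by that
# cue alone; if no cue discriminates, guess.
def take_the_best(cues_a, cues_b, cue_order):
    for cue in cue_order:                 # cues sorted by validity, best first
        a, b = cues_a[cue], cues_b[cue]
        if a != b:                        # first discriminating cue: stop search
            return "A" if a > b else "B"  # one-reason decision
    return "guess"                        # no cue discriminates

# Which of two cities is larger? Decided by the first discriminating cue,
# ignoring all further cues (hypothetical cue values).
print(take_the_best({"capital": 1, "team": 1},
                    {"capital": 0, "team": 1},
                    ["capital", "team"]))  # -> A
```

The point of the sketch is that the strategy deliberately ignores all cues after the first discriminating one, which is precisely where its frugality, and often its robustness, comes from.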

Logic can be easy

Evans and Over’s (2009) second major conjecture is that “the rules of logic and probability theory can sometimes be easy to apply” (p. 4). Sometimes—no problem. But if we remember correctly, research on reasoning has also emphasized that people systematically violate these rules. Prominent examples include the conjunction fallacy, the Wason selection task, base rate neglect, and the belief bias research of Jonathan Evans (e.g., Evans 2007).

Toward formal models instead of vague labels

Evans and Over’s (2009) final point is that the fast and frugal heuristics framework ignores research on dual-process theory, but that “the role of their fast and frugal heuristics can only [be] correctly understood within such a framework” (p. 4). We disagree. From our point of view, the dual-process framework that Evans and Over refer to is too vague to be useful. In the absence of formal models, none of the important results in the three initial examples given in this reply—such as when the recognition heuristic, take-the-best, or tallying lead to less-is-more effects, or the test of the hiatus heuristic against optimizing models—could have been derived. Gigerenzer and Regier (1996) and others (Keren and Schul 2009) have criticized in detail the jumbling together of various theories into a loose list of opposing labels. Indeed, it seems that much of the data and theory on a general dual-process framework is mired in debates about jargon. This use of jargon to redescribe jargon is a hallmark of theoretical stagnation (Kuhn 1962).

Conclusion

If we are to make scientific progress, we must move beyond naming and renaming vague ideas. General verbal distinctions such as “rule-based versus instance-based” do not represent progress beyond what was well documented nearly 40 years ago, in the 1970s, unless these labels can be substantiated in terms of formal models that precisely define what they mean and what they predict. As stressed by Hintzman (1990), what really matters is the precision with which psychological models are defined and not how they are labeled. In contrast to Evans (2008), who proposed replacing the labels System 1 and System 2 with the terms Type 1 and Type 2 processes, we therefore suggest that it is instead the dual-process framework with its two “black boxes” that should be replaced by computationally precise formal models. Scientific progress is not found in the accumulation of marketable labels. Instead, it requires the development of precise theories of psychological processes that lead to clear, testable, quantitative predictions.
References (56 in total)

1.  The priority heuristic: making choices without trade-offs.

Authors:  Eduard Brandstätter; Gerd Gigerenzer; Ralph Hertwig
Journal:  Psychol Rev       Date:  2006-04       Impact factor: 8.934

2.  Take the best or look at the rest? Factors influencing "one-reason" decision making.

Authors:  Ben R Newell; David R Shanks
Journal:  J Exp Psychol Learn Mem Cogn       Date:  2003-01       Impact factor: 3.051

3.  Take the best versus simultaneous feature matching: probabilistic inferences from memory and effects of representation format.

Authors:  Arndt Bröder; Stefanie Schiffer
Journal:  J Exp Psychol Gen       Date:  2003-06

4.  Predicting short-term stock fluctuations by using processing fluency.

Authors:  Adam L Alter; Daniel M Oppenheimer
Journal:  Proc Natl Acad Sci U S A       Date:  2006-06-05       Impact factor: 11.205

5.  Human learning and memory: connections and dissociations. (Review)

Authors:  D L Hintzman
Journal:  Annu Rev Psychol       Date:  1990       Impact factor: 24.137

6.  Simple predictions fueled by capacity limitations: when are they successful?

Authors:  Wolfgang Gaissmaier; Lael J Schooler; Jörg Rieskamp
Journal:  J Exp Psychol Learn Mem Cogn       Date:  2006-09       Impact factor: 3.051

7.  Why you think milan is larger than modena: neural correlates of the recognition heuristic.

Authors:  Kirsten G Volz; Lael J Schooler; Ricarda I Schubotz; Markus Raab; Gerd Gigerenzer; D Yves von Cramon
Journal:  J Cogn Neurosci       Date:  2006-11       Impact factor: 3.225

8.  Searching for patterns in random sequences.

Authors:  George Wolford; Sarah E Newman; Michael B Miller; Gagan S Wig
Journal:  Can J Exp Psychol       Date:  2004-12

9.  The smart potential behind probability matching.

Authors:  Wolfgang Gaissmaier; Lael J Schooler
Journal:  Cognition       Date:  2008-11-18

10.  Heuristic and linear models of judgment: matching rules and environments.

Authors:  Robin M Hogarth; Natalia Karelaia
Journal:  Psychol Rev       Date:  2007-07       Impact factor: 8.934

