| Literature DB >> 26518307 |
Udo Boehm, Guy E. Hawkins, Scott Brown, Hedderik van Rijn, Eric-Jan Wagenmakers.
Abstract
For decades, sequential sampling models have successfully accounted for human and monkey decision-making, relying on the standard assumption that decision makers maintain a pre-set decision standard throughout the decision process. Based on the theoretical argument of reward rate maximization, some authors have recently suggested that decision makers become increasingly impatient as time passes and therefore lower their decision standard. Indeed, a number of studies show that computational models with an impatience component provide a good fit to human and monkey decision behavior. However, many of these studies lack quantitative model comparisons and systematic manipulations of rewards. Moreover, the often-cited evidence from single-cell recordings is not unequivocal, and complementary data from human subjects are largely missing. We conclude that, despite some enthusiastic calls for the abandonment of the standard model, the idea of an impatience component has yet to be fully established; we suggest a number of recently developed tools that will help bring the debate to a conclusive settlement.
Keywords: Collapsing bounds; Decision-making; Drift diffusion model; Reward rate maximization; Single-cell recordings
Year: 2016 PMID: 26518307 PMCID: PMC4887547 DOI: 10.3758/s13423-015-0958-5
Source DB: PubMed Journal: Psychon Bull Rev ISSN: 1069-9384
Fig. 1 Three versions of the drift diffusion model for a two-alternative forced choice paradigm, such as the random dot motion task. The upper decision threshold corresponds to a “right” decision and the lower threshold corresponds to a “left” decision. The drift rate is positive in this example (the evidence process drifts upward), indicating that the correct response is “the dots are moving to the right”. The left panel shows the standard DDM with static decision thresholds, where a choice is made when the accumulated evidence reaches one of the two thresholds. The middle panel shows a DDM with collapsing thresholds that gradually move inward, so that less evidence is required to trigger a decision as time passes (blue lines). This decision policy predicts shorter decision times than the DDM with static thresholds when faced with weak evidence (i.e., a low drift rate), as it partially truncates the positively skewed distribution of response times. The right panel shows a DDM with an urgency gating mechanism: the accumulated evidence is multiplied by an urgency signal that increases with decision time (blue line). This decision policy again predicts shorter decision times than the DDM with static thresholds, but also increased variability, as moment-to-moment variations in the accumulated evidence are amplified as well.
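The three model variants in Fig. 1 differ only in how the decision rule changes over time. A minimal simulation sketch can make this concrete; the parameter values and the linear forms of the collapse and urgency signals below are illustrative assumptions, not the parameterizations used in the paper.

```python
import numpy as np

def simulate_ddm(drift=0.1, noise=1.0, threshold=1.0, dt=0.001,
                 collapse_rate=0.0, urgency_gain=0.0, max_t=5.0, rng=None):
    """Simulate one trial of a drift diffusion model (illustrative sketch).

    collapse_rate > 0 makes the thresholds shrink linearly over time
    (collapsing bounds); urgency_gain > 0 multiplies the accumulated
    evidence by a linearly growing urgency signal (urgency gating).
    With both set to 0, this reduces to the standard static-threshold DDM.
    """
    if rng is None:
        rng = np.random.default_rng()
    x, t = 0.0, 0.0
    while t < max_t:
        t += dt
        # Euler step of the diffusion process: drift plus Gaussian noise
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        # Collapsing bound (floored so thresholds never fully meet)
        bound = max(threshold - collapse_rate * t, 0.05)
        # Urgency-gated evidence signal
        signal = x * (1.0 + urgency_gain * t)
        if signal >= bound:
            return t, "right"
        if signal <= -bound:
            return t, "left"
    return max_t, "timeout"
```

For example, running many trials with a positive drift rate yields mostly “right” responses, and increasing `collapse_rate` shortens decision times on weak-evidence trials, as described in the caption.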
Fig. 2 DDMs with static and dynamic decision criteria fitted to four data sets (subset of results reported in Forstmann et al., 2015). Column names cite the original data source; example data sets from non-human primates and humans are shown in the left two and right two columns, respectively. The upper row shows the estimated collapsing (solid lines) and static (dashed lines) thresholds, averaged across participants. The second, third, and fourth rows display the fits of the static thresholds, urgency gating, and collapsing thresholds models to the data, respectively. The y-axes represent response time and the x-axes represent the probability of a correct choice. Green and red crosses indicate correct and error responses, respectively, and black lines represent model predictions. The vertical positions of the crosses indicate the 10th, 30th, 50th, 70th, and 90th percentiles of the response time distribution. When the estimated collapsing and static thresholds differed markedly (first and third columns), the DDMs with dynamic decision criteria provided a better fit to the data than the DDM with static criteria. When the collapsing thresholds were similar to the static thresholds (second and fourth columns), the predictions of the static and dynamic DDMs were highly similar, indicating that the extra complexity of the dynamic DDMs was not warranted in those data sets. For full details, see Hawkins, Forstmann, et al., 2015.
Fig. 3 Behavioral and physiological variables used in the evaluation of DDMs. The left panel shows a response time distribution, the classic behavioral variable against which DDMs are tested. The middle panel shows activity patterns of individual neurons (bottom) and the average firing rate of such a neuron population (top). The right panel shows an averaged EEG waveform, which reflects the aggregate activity of large neuron ensembles in the human cortex. Model comparisons based on behavioral outcomes such as response time distributions are limited in their ability to discriminate between models with different process assumptions but similar behavioral predictions. Physiological measurements such as single-cell recordings in primates and EEG recordings in humans allow for a more thorough evaluation of the process assumptions underlying candidate models. A question that remains unanswered is how physiological measurements at different levels of aggregation (i.e., single neurons vs. large neuron populations) relate to each other, and to what degree they constrain process models (full behavioral and EEG data reported in Boehm, Van Maanen, Forstmann, & Van Rijn, 2014; single-cell data were generated using a Poisson model).
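The caption notes that the single-cell data in Fig. 3 were generated with a Poisson model. A minimal sketch of this kind of synthetic data is shown below; the firing rate, duration, and population size are hypothetical choices, and the homogeneous (constant-rate) Poisson process is only one simple variant of such a model.

```python
import numpy as np

def poisson_spike_trains(rate_hz, duration_s, n_neurons, dt=0.001, rng=None):
    """Generate independent homogeneous Poisson spike trains and the
    population-averaged firing rate (illustrative sketch)."""
    if rng is None:
        rng = np.random.default_rng()
    n_bins = int(duration_s / dt)
    # Small-dt approximation: a spike occurs in each bin with
    # probability rate_hz * dt, independently across bins and neurons.
    spikes = rng.random((n_neurons, n_bins)) < rate_hz * dt
    # Population firing rate in Hz for each time bin, averaged over neurons
    pop_rate = spikes.mean(axis=0) / dt
    return spikes, pop_rate
```

Averaging the resulting raster over neurons recovers a noisy estimate of the underlying rate, mirroring the bottom (single-neuron) and top (population-average) views in the middle panel.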