Chandramouli Chandrasekaran, Guy E. Hawkins.
Abstract
BACKGROUND: Decision-making is the process of choosing and performing actions in response to sensory cues to achieve behavioral goals. Many mathematical models have been developed to describe the choice behavior and response time (RT) distributions of observers performing decision-making tasks. However, relatively few researchers use these models, because doing so demands expertise in various numerical, statistical, and software techniques.
Keywords: AIC; BIC; Choice; DDM; Decision making; Diffusion decision model; Model selection; Response time (RT); Urgency gating
Year: 2019 PMID: 31586868 PMCID: PMC6980795 DOI: 10.1016/j.jneumeth.2019.108432
Source DB: PubMed Journal: J Neurosci Methods ISSN: 0165-0270 Impact factor: 2.390
List of symbols used in the decision-making models implemented in ChaRTr.
| Parameter | Description |
|---|---|
| — | State of the decision variable at time t. |
| Δ | Time step of the decision variable. |
| — | Starting state of the decision variable. |
| — | Rate at which the decision variable accumulates decision-relevant information (drift rate). |
| γ(t) | Urgency signal that dynamically modulates the decision variable as a function of time. |
| — | Upper and lower response boundaries that terminate the decision process. |
| A(t) | Upper and lower response boundaries that vary as a function of time. |
| — | Time required for stimulus encoding and motor preparation/execution (non-decision time), and decision-to-decision variability in non-decision time. |
| — | Within-decision variability in the diffusion process. Represents the standard deviation of a normal distribution. By convention, set to a fixed value to satisfy a scaling property of the model. |
| — | Momentary sensory evidence at time t. |
| — | Intercept and variability of the intercept in urgency-based models with linear urgency signals. |
| N(0, 1) | Normal distribution with zero mean and unit variance. |
| — | Uniform distribution over a given interval. |
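The quantities in this table combine in a single discrete-time (Euler) update of the diffusion process. Written with generic symbol names (x for the decision variable, v for the drift rate, s for the within-decision variability; these letters are common conventions and not necessarily the paper's own notation):

```latex
% One Euler step of the diffusion process, with time step \Delta:
x(t + \Delta) = x(t) + v\,\Delta + s\,\sqrt{\Delta}\,\epsilon_t,
\qquad \epsilon_t \sim N(0, 1)
% Urgency-based variants compare \gamma(t)\,x(t), rather than x(t) itself,
% against the response boundaries.
```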
List of the 37 models available in ChaRTr, along with the individual parameters in each model and the total number of parameters. n refers to the number of stimulus conditions used.
| Abbreviation | Parameters | Number of parameters |
|---|---|---|
| DDM | — | — |
| DDMS | — | — |
| DDMS | — | — |
| DDMS | — | — |
| DDMS | — | — |
| DDMS | — | — |
| cDDM | — | — |
| cDDMS | — | — |
| cDDMS | — | — |
| cDDMS | — | — |
| cDDMS | — | — |
| cDDMS | — | — |
| cfkDDM | — | — |
| cfkDDMS | — | — |
| cfkDDMS | — | — |
| cfkDDMS | — | — |
| cfkDDMS | — | — |
| uDDM | — | — |
| uDDMS | — | — |
| uDDMS | — | — |
| uDDMS | — | — |
| uDDMS | — | — |
| uDDMS | — | — |
| dDDM | — | — |
| dDDMS | — | — |
| dDDMS | — | — |
| dDDMS | — | — |
| dDDMS | — | — |
| UGM | — | — |
| UGMS | — | — |
| UGMS | — | — |
| UGMS | — | — |
| bUGM | — | — |
| bUGMS | — | — |
| bUGMS | — | — |
| bUGMS | — | — |
| bUGMS | — | — |
Fig. 1. Schematic of some sequential sampling models of decision-making incorporated in ChaRTr. (A) The DDM is the simplest example of a diffusion model of decision-making. (B) A variant of the DDM with variable non-decision time (S), variable drift rate (S), and a variable start point (S). (C) A DDM with collapsing bounds and variability in the non-decision time and drift rate. The function A(t) takes the form of a Weibull function as defined in Eq. (6). (D) A variant of the DDM with variable non-decision time and drift rate, and an "urgency signal". This urgency signal grows with elapsed decision time, which is implemented by multiplying the decision variable by the increasing function of time γ(t) (Eq. (10), following Ditterich, 2006a). (E) UGM with variable drift rate (S) and variable non-decision time (S). In the standard UGM, the urgency signal is thought to depend only on time and thus starts at 0. The sensory evidence is passed through a low-pass filter (typically with a 100–250 ms time constant; Carland et al., 2015; Thura et al., 2012), and then multiplied by the urgency signal to produce a decision variable that is compared to the decision boundaries. (F) Schematic of urgency signals with an intercept (top panel) and a variable intercept (bottom panel).
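The UGM mechanism described in panel E (noisy evidence passed through a low-pass filter, then multiplied by an urgency signal that starts at 0 and grows with time) can be sketched directly. A minimal Python illustration; the parameter values and the linear form of the urgency signal are assumptions for illustration, not ChaRTr's implementation:

```python
import random

def simulate_ugm_trial(evidence_mean=1.0, evidence_sd=1.0, tau=0.2,
                       bound=3.0, dt=0.001, max_t=5.0, seed=None):
    """One trial of a basic urgency-gating model: momentary evidence is
    low-pass filtered (time constant tau), multiplied by a linearly
    growing urgency signal, and compared to symmetric bounds."""
    rng = random.Random(seed)
    x = 0.0          # low-pass filtered evidence
    t = 0.0
    while t < max_t:
        e = rng.gauss(evidence_mean, evidence_sd)  # momentary evidence
        x += (dt / tau) * (e - x)                  # first-order low-pass filter
        urgency = t                                # urgency starts at 0, grows with time
        dv = x * urgency                           # gated decision variable
        if dv >= bound:
            return t, 1                            # upper-bound response
        if dv <= -bound:
            return t, 0                            # lower-bound response
        t += dt
    return max_t, None                             # no decision before the deadline
```

Because the urgency term multiplies the filtered evidence, even weak evidence is eventually pushed across a bound, which is the model's account of time pressure.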
Fig. 2. A quantile probability (QP) plot of choice and RT data from a hypothetical decision-making experiment with three levels of stimulus difficulty. The three difficulty levels are represented as vertical columns mirrored around the midpoint of the x-axis (0.5). In this example, the lowest accuracy condition had ~55% correct responses, so the RTs for correct responses in this condition are located at 0.55 on the x-axis and the corresponding RTs for error responses are located at 1 − 0.55 = 0.45 on the x-axis; these two RT distributions are highlighted in gray bars. For each RT distribution we plot along the y-axis the 10th, 30th, 50th, 70th, 90th percentiles (i.e., 0.1, 0.3, 0.5, 0.7, 0.9 quantiles), separately for correct and error responses in each of the three difficulty levels. For clarity, correct responses are shown in blue and error responses are shown in yellow. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
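The coordinates of a QP plot follow directly from this description: correct RT quantiles are plotted at p(correct), error RT quantiles at 1 − p(correct). A small Python sketch (function names are illustrative):

```python
def quantiles(xs, probs=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Empirical quantiles by linear interpolation between order statistics."""
    s = sorted(xs)
    out = []
    for p in probs:
        pos = p * (len(s) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(s) - 1)
        out.append(s[lo] + (pos - lo) * (s[hi] - s[lo]))
    return out

def qp_points(correct_rts, error_rts):
    """x/y coordinates for one difficulty level of a QP plot:
    correct RTs are placed at p(correct) on the x-axis,
    error RTs at the mirrored position 1 - p(correct)."""
    n_c, n_e = len(correct_rts), len(error_rts)
    p_correct = n_c / (n_c + n_e)
    return ((p_correct, quantiles(correct_rts)),
            (1 - p_correct, quantiles(error_rts)))
```

Applying `qp_points` to each difficulty level and scatter-plotting the resulting (x, quantile) pairs reproduces the mirrored-column layout of Fig. 2.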
Fig. 3. ChaRTr flow chart. Models are specified and, once data are available, the parameters are estimated through the optimization procedure. Once parameter estimation is complete, the final goodness-of-fit statistic is calculated for every model under consideration, which is used for subsequent model selection analyses.
Fig. 4. Flow chart for the parameter estimation component of ChaRTr, which uses the differential evolution optimization algorithm (Mullen et al., 2011).
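As a rough illustration of the algorithm family involved, here is a minimal rand/1/bin differential evolution loop in Python. This is a toy sketch of the general technique, not ChaRTr's or DEoptim's actual implementation; all control parameters are illustrative defaults:

```python
import random

def differential_evolution(objective, bounds, pop_size=20, f=0.8, cr=0.9,
                           generations=200, seed=0):
    """Minimal rand/1/bin differential evolution. Each 'particle' is a
    candidate parameter vector; proposals are built by adding a scaled
    difference of two random particles to a third, with binomial crossover,
    and replace the current particle only if they score at least as well.
    (In ChaRTr, the objective would be the negative log-likelihood of the
    simulated model given the observed choices and RTs.)"""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [objective(p) for p in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = []
            for d in range(dim):
                if rng.random() < cr:
                    v = pop[a][d] + f * (pop[b][d] - pop[c][d])  # mutation
                else:
                    v = pop[i][d]                                # inherit
                lo, hi = bounds[d]
                trial.append(min(max(v, lo), hi))                # clip to bounds
            tf = objective(trial)
            if tf <= fit[i]:                                     # greedy selection
                pop[i], fit[i] = trial, tf
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

Differential evolution only needs objective-function values, not gradients, which is why it suits simulation-based likelihoods of the kind ChaRTr computes.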
Creating a shared library for loading the specified models into R.
The required raw data format for parameter estimation in ChaRTr.
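By way of illustration, a trial-by-trial layout of the kind such a fitting routine consumes might look as follows in Python. The column names here are hypothetical, not ChaRTr's actual field names:

```python
# One row per trial: the stimulus condition, the observed response,
# and the response time in seconds (field names are illustrative).
trials = [
    {"condition": 1, "response": 1, "rt": 0.523},
    {"condition": 1, "response": 0, "rt": 0.611},
    {"condition": 2, "response": 1, "rt": 0.488},
]

def split_by_condition(trials):
    """Group RTs by (condition, response), the grouping a fitting routine
    needs in order to compare observed and simulated RT distributions."""
    groups = {}
    for tr in trials:
        key = (tr["condition"], tr["response"])
        groups.setdefault(key, []).append(tr["rt"])
    return groups
```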
R code for simulating RT and choice responses from the simple diffusion decision model (DDM).
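The simulation scheme such a listing implements can be sketched compactly. A Python version of the same Euler-Maruyama idea (ChaRTr's own listing is in R; the parameter values here are illustrative assumptions, with s fixed by the usual scaling convention):

```python
import math
import random

def simulate_ddm(n_trials=500, v=0.2, a=0.1, z=0.05, s=0.1,
                 ter=0.3, dt=0.001, max_t=10.0, seed=7):
    """Simulate RTs and choices from the simple DDM: the decision variable
    starts at z and drifts at rate v with within-trial noise s until it
    crosses the upper bound a (choice 1) or zero (choice 0); non-decision
    time ter is added to the crossing time."""
    rng = random.Random(seed)
    rts, choices = [], []
    for _ in range(n_trials):
        x, t = z, 0.0
        while 0.0 < x < a and t < max_t:
            x += v * dt + s * math.sqrt(dt) * rng.gauss(0.0, 1.0)  # Euler step
            t += dt
        choices.append(1 if x >= a else 0)
        rts.append(ter + t)
    return rts, choices
```

With a positive drift rate, most trials terminate at the upper bound, and the resulting RT distribution shows the characteristic right skew visible in the QP plots above.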
Fig. 5. Quantile probability plots of data simulated from four models in ChaRTr. (A) DDM, (B) DDM with variable drift rates, starting state and non-decision time (DDMSSS), (C) urgency gating model with variable drift rates (UGMSv), and (D) DDM with an urgency signal and a variable drift rate defined as per Ditterich (2006a; dDDMS). Gray points denote data. Lines are drawn for visualization purposes.
Fig. 6. Model selection and parameter estimation outcomes from applying a range of cognitive models of decision-making to choice and RT data from five hypothetical observers (case study 1). A–C show outcomes from one hypothetical observer and E shows outcomes from a second hypothetical observer. Data were generated using the model DDMSS. (A) AIC values for each model with the DDM as the reference. To guide the eye and ease readability, bars are colored based on whether the model fits the data better or worse than the DDM. The best model is shown in green, and the next five best models are shown in orange. The remaining models better than the DDM are shown in gray, and models worse than the DDM are shown in purple. (B) Same as A but using BIC as the model comparison metric. (C) Akaike weights and BIC-based approximate posterior model probabilities for the six models that provided the best account of the data. ChaRTr correctly identifies the true data-generating model (DDMSS) as one of the most likely candidates for describing the data. (D) Data-generating and estimated parameter values for the DDMSS model shown in A. Close alignment indicates ChaRTr recovered the true parameter values. (E) Akaike weights and posterior model probabilities from another hypothetical observer. Color conventions as in C. (F) Average Akaike weights and posterior model probabilities across all five hypothetical observers, assuming the observers are independent. Reassuringly, DDMSS is identified as one of the most plausible models for the data. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
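The model-selection quantities reported in these figures are straightforward to compute from each model's maximized log-likelihood. A Python sketch (function name illustrative): AIC penalizes each model by twice its parameter count, BIC by the parameter count times the log of the number of observations, and Akaike weights renormalize the AIC differences into relative model probabilities:

```python
import math

def model_weights(log_likelihoods, n_params, n_obs):
    """AIC, BIC, and Akaike weights for a set of fitted models.
    log_likelihoods and n_params map model names to the maximized
    log-likelihood and the number of free parameters, respectively."""
    aic = {m: 2 * n_params[m] - 2 * ll for m, ll in log_likelihoods.items()}
    bic = {m: n_params[m] * math.log(n_obs) - 2 * ll
           for m, ll in log_likelihoods.items()}
    best = min(aic.values())
    rel = {m: math.exp(-0.5 * (v - best)) for m, v in aic.items()}  # exp(-ΔAIC/2)
    total = sum(rel.values())
    weights = {m: r / total for m, r in rel.items()}                # sum to 1
    return aic, bic, weights
```

Lower AIC/BIC is better; the weights make the comparison easy to read because they sum to 1 across the candidate set.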
Fig. 7. Quantile probability (QP) plots showing correct RTs (blue) and error RTs (orange) for two hypothetical observers (case study 1), along with the model predictions (gray dots). Predictions from the four best-fitting models are shown along with the simplest model, the DDM. The best-fitting models shown are DDMSS, DDMS, DDMSSS, cfkDDMSS, and dDDMSS. Numbers at the top of each plot show the log-likelihood, AIC, and BIC for the model under consideration. AIC and BIC are computed with respect to the DDM. Higher values of log-likelihood are better. When assuming the DDM as the base (reference) model and AIC as the penalized model selection metric, the model DDMSS provides the best account of the data. When using BIC as the penalized model selection metric, the model DDMS provides a better description of the data. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 8. Model selection and parameter estimation outcomes from applying a range of cognitive models of decision-making to data from hypothetical observers (case study 2). Decision-making in these hypothetical observers is controlled by the model bUGMS. (A) AIC values as a function of model with the DDM as the reference for one hypothetical observer, Subj 3. Color conventions as in Fig. 6A. (B) BIC values as a function of model with the DDM as the reference for the same subject shown in A. (C) Akaike weights and posterior model probabilities for the six models that provided the best account of Subj 3's behavior. (D) Results for another hypothetical subject. (E) Results for the population of hypothetical subjects. The most probable model for this set of hypothetical observers is the generative model, bUGMS. However, we note that other models such as bUGM, UGMS, and uDDMS provide quite good descriptions of the behavior. This result is in keeping with the general notion that model selection ought to be used as a guide to the most likely models and not necessarily to argue for a "best" model. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 9. Model selection outcomes from applying a range of cognitive models of decision-making to data from two monkeys (Roitman and Shadlen, 2002). (A) and (B) show outcomes from monkeys b and n when comparing models with various forms of urgency against simple diffusion decision models without urgency. For both monkeys, ChaRTr suggests models with urgency are better candidates for describing the data than DDMs without urgency. (C) and (D) show outcomes from monkeys b and n when comparing UGM vs. DDM models. For both monkeys, UGM-based models substantially outperform the DDM-based models.
Fig. 10. Quantile probability (QP) plots showing correct RTs (blue) and error RTs (yellow crosses) for the two monkeys from Roitman and Shadlen (2002), along with the model predictions (gray dots). Predictions from DDMSSS are shown along with four other models: uDDMSS, bUGMSS, uDDMSS, and bUGMSS. Numbers at the top of each plot show the log-likelihood, AIC, and BIC for the model under consideration. Higher (more positive) values of log-likelihood are better. AIC and BIC are reported assuming DDMSSS as the base (reference) model. For both monkeys, the model bUGMSS is the best of this candidate set at describing the data. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 11. Computation time for models in ChaRTr. (A) Run time as a function of the number of parameters in the models for the four different settings we considered. (B) Computation time for each model under each of the four settings. (C) Total time for parameter estimation under the different parameter settings and random number generators. In all panels, np refers to the number of particles used in the differential evolution algorithm, and n is the number of Monte Carlo replicates used to simulate each model. Higher values of n provide more precision but require more time.
Fig. 12. Optimizing computational speed is not detrimental to model selection analysis in ChaRTr. Akaike weights averaged over five hypothetical subjects in a model selection analysis with different settings of the random number generator, number of particles, and number of simulated trials per particle, for case study 1 (A) and case study 2 (B). For both case studies, ChaRTr reliably identifies the correct data-generating model and in many cases agrees on the second-best model for the data. We also note that the exact ranking of the models differs slightly across hyperparameter settings, but the set of identified models is consistent.