| Literature DB >> 25214675 |
Darrell A. Worthy, W. Todd Maddox.
Abstract
W.K. Estes often championed an approach to model development whereby an existing model was augmented by the addition of one or more free parameters, and a comparison between the simple model and the more complex, augmented model determined whether the additions were justified. Following this same approach, we utilized Estes' (1950) own augmented learning equations to improve the fit and plausibility of a win-stay-lose-shift (WSLS) model that we have used in much of our recent work. Estes also championed models that assumed a comparison between multiple concurrent cognitive processes. In line with this, we develop a WSLS-Reinforcement Learning (RL) model that assumes that the output of a WSLS process, which provides a probability of staying with or switching away from an option based on the last two decision outcomes, is compared with the output of an RL process, which determines a probability of selecting each option based on a comparison of the options' expected values. Fits to data from three different decision-making experiments suggest that the augmentations to the WSLS and RL models lead to a better account of decision-making behavior. Our results also support the assertion that human participants weigh the output of WSLS and RL processes during decision-making.
Keywords: Decision-making; dual-process; mathematical modeling; reinforcement learning; win-stay-lose-shift
Year: 2014 PMID: 25214675 PMCID: PMC4159167 DOI: 10.1016/j.jmp.2013.10.001
Source DB: PubMed Journal: J Math Psychol ISSN: 0022-2496 Impact factor: 2.223
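The abstract describes a dual-process model in which choice probabilities from a WSLS process and an RL process are weighed against each other. As an illustrative sketch only (not the authors' exact equations; the weight parameter `w`, the softmax temperature, and the stay/shift parameters are assumptions for illustration), one common way to formalize such a mixture is:

```python
# Illustrative sketch of a WSLS-RL mixture model (hypothetical parameterization,
# not the published model): final choice probabilities are a weighted average
# of a WSLS process and a softmax-over-expected-values RL process.
import math

def rl_probs(q, temp):
    """Softmax over expected values q with inverse temperature temp."""
    exps = [math.exp(temp * v) for v in q]
    s = sum(exps)
    return [e / s for e in exps]

def wsls_probs(last_choice, last_was_win, p_stay_win, p_shift_loss, n_options):
    """Probability of repeating the last choice (stay) vs. leaving it (shift),
    conditioned on whether the last outcome counted as a win or a loss."""
    p_stay = p_stay_win if last_was_win else 1.0 - p_shift_loss
    # Shift probability is spread evenly over the remaining options.
    probs = [(1.0 - p_stay) / (n_options - 1)] * n_options
    probs[last_choice] = p_stay
    return probs

def wsls_rl_probs(q, last_choice, last_was_win, w,
                  temp=1.0, p_stay_win=0.9, p_shift_loss=0.8):
    """Weighted mixture: w on the WSLS process, (1 - w) on the RL process."""
    rl = rl_probs(q, temp)
    ws = wsls_probs(last_choice, last_was_win,
                    p_stay_win, p_shift_loss, len(q))
    return [w * a + (1.0 - w) * b for a, b in zip(ws, rl)]
```

With `w = 1` the model reduces to pure WSLS behavior, and with `w = 0` to pure value-based RL choice; intermediate weights capture the abstract's claim that participants combine both processes.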