| Literature DB >> 28827119 |
James E Witnauer, Ryan Hutchings, Ralph R Miller.
Abstract
Contemporary theories of associative learning are increasingly complex, which necessitates the use of computational methods to reveal their predictions. We argue that comparisons across multiple models in terms of goodness of fit to empirical data often reveal more about the actual mechanisms of learning and behavior than do simulations of a single model. Such comparisons are best made when the values of free parameters are discovered through an optimization procedure based on the specific data being fit (e.g., hill climbing), so that the comparisons hinge on the psychological mechanisms assumed by each model rather than being biased by parameters that differ in quality across models with respect to the data being fit. Statistics like the Bayesian information criterion facilitate comparisons among models that have different numbers of free parameters. These issues are examined using retrospective revaluation data.
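The workflow the abstract describes, optimizing each model's free parameters against the data by hill climbing, then comparing fits with a BIC that penalizes extra parameters, can be sketched as follows. This is a minimal illustration, not the authors' code: the toy linear model, step size, and least-squares BIC formula are all assumptions chosen for brevity.

```python
import math
import random

def bic(sse, n, k):
    # BIC for a least-squares fit of k free parameters to n data points:
    # n * ln(SSE / n) + k * ln(n). Lower is better; the k * ln(n) term
    # penalizes models with more free parameters.
    return n * math.log(sse / n) + k * math.log(n)

def sum_squared_error(params, model, data):
    # Goodness of fit: summed squared deviation of model from observations.
    return sum((model(x, params) - y) ** 2 for x, y in data)

def hill_climb(model, data, start, step=0.05, iters=2000, seed=0):
    # Simple stochastic hill climbing: perturb the current best parameter
    # vector and keep the candidate whenever it reduces the error.
    rng = random.Random(seed)
    best = list(start)
    best_sse = sum_squared_error(best, model, data)
    for _ in range(iters):
        cand = [p + rng.uniform(-step, step) for p in best]
        cand_sse = sum_squared_error(cand, model, data)
        if cand_sse < best_sse:
            best, best_sse = cand, cand_sse
    return best, best_sse

# Hypothetical example: fit a one-parameter linear "model" to toy data,
# then score the optimized fit with BIC so it could be compared against
# a rival model with a different number of free parameters.
data = [(x, 2.0 * x) for x in range(1, 6)]
model = lambda x, p: p[0] * x
params, final_sse = hill_climb(model, data, start=[0.5])
score = bic(final_sse + 1e-12, n=len(data), k=1)  # epsilon guards log(0)
```

In a real comparison, each competing model would be optimized the same way on the same data, and the model with the lowest BIC would be preferred.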
Keywords: Associative learning; Bayesian information criterion; Free parameters; Mathematical models of learning; Pavlovian conditioning; Retrospective revaluation
Year: 2017 PMID: 28827119 PMCID: PMC5640503 DOI: 10.1016/j.beproc.2017.08.004
Source DB: PubMed Journal: Behav Processes ISSN: 0376-6357 Impact factor: 1.777