Anthony G Chen, David Benrimoh, Thomas Parr, Karl J Friston.
Abstract
This paper offers a formal account of policy learning, or habitual behavioral optimization, under the framework of Active Inference. In this setting, habit formation becomes an autodidactic, experience-dependent process, based upon what the agent sees itself doing. We focus on the effect of environmental volatility on habit formation by simulating artificial agents operating in a partially observable Markov decision process. Specifically, we used a "two-step" maze paradigm, in which the agent has to decide whether to go left or right to secure a reward. We observe that in volatile environments with numerous reward locations, the agents learn to adopt a generalist strategy, never forming a strong habitual behavior for any preferred maze direction. Conversely, in conservative or static environments, agents adopt a specialist strategy, forming strong preferences for policies that result in approach to a small number of previously observed reward locations. The pros and cons of the two strategies are tested and discussed. In general, specialization offers greater benefits, but only when contingencies are conserved over time. We consider the implications of this formal (Active Inference) account of policy learning for understanding the relationship between specialization and habit formation.
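The abstract's core mechanism lends itself to a toy illustration. Below is a minimal sketch, not the authors' code, of experience-dependent habit learning in a two-choice maze: the habit prior over policies is a Dirichlet distribution whose concentration parameters grow for whichever policy the agent observes itself executing. The variable names, the softmax decision rule (used here as a simple stand-in for expected free energy minimization), and the learning-rate and precision values are all illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of habit (policy prior) learning under volatility.
# Static environments should yield a low-entropy (specialist) habit prior;
# volatile ones a high-entropy (generalist) prior, as the abstract describes.
import numpy as np

rng = np.random.default_rng(0)

def run(switch_prob, trials=2000, lr=0.2, gamma=4.0):
    e = np.ones(2)            # Dirichlet concentrations: habit prior over L/R
    q = np.full(2, 0.5)       # learned reward probability per maze arm
    rewarded = 0              # currently rewarding arm
    for _ in range(trials):
        if rng.random() < switch_prob:      # environmental volatility
            rewarded = 1 - rewarded
        prior = e / e.sum()                 # habitual prior over policies
        value = gamma * q + np.log(prior)   # reward-seeking plus habit
        p = np.exp(value - value.max())
        p /= p.sum()
        a = rng.choice(2, p=p)              # sample a policy and act
        r = float(a == rewarded)
        q[a] += lr * (r - q[a])             # update reward beliefs
        e[a] += 1.0                         # habit: count what was done
    prior = e / e.sum()
    return -(prior * np.log(prior)).sum()   # entropy of the habit prior

print("static   habit entropy:", round(run(switch_prob=0.0), 3))   # low: specialist
print("volatile habit entropy:", round(run(switch_prob=0.05), 3))  # high: generalist
```

Under these assumptions, the static agent's concentration counts pile onto one arm (entropy well below the two-option maximum of ln 2 ≈ 0.693), while the volatile agent's counts stay roughly balanced, mirroring the generalist/specialist distinction drawn in the abstract.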
Keywords: Bayesian; active inference; generative model; learning strategies; predictive processing; preferences
Year: 2020 PMID: 33733186 PMCID: PMC7861269 DOI: 10.3389/frai.2020.00069
Source DB: PubMed Journal: Front Artif Intell ISSN: 2624-8212