Active Inference: Demystified and Compared.

Noor Sajid, Philip J Ball, Thomas Parr, Karl J Friston

Abstract

Active inference is a first-principles account of how autonomous agents operate in dynamic, nonstationary environments. This problem is also considered in reinforcement learning, but limited work exists comparing the two approaches on the same discrete-state environments. In this letter, we provide (1) an accessible overview of the discrete-state formulation of active inference, highlighting natural behaviors in active inference that are generally engineered in reinforcement learning, and (2) an explicit discrete-state comparison between active inference and reinforcement learning on an OpenAI Gym baseline. We begin with a condensed overview of the active inference literature, in particular viewing the various natural behaviors of active inference agents through the lens of reinforcement learning. We show that, by operating in a pure belief-based setting, active inference agents can carry out epistemic exploration, and account for uncertainty about their environment, in a Bayes-optimal fashion. Furthermore, we show that the reliance on an explicit reward signal in reinforcement learning is removed in active inference, where reward can simply be treated as another observation we have a preference over; even in the total absence of rewards, agent behaviors are learned through preference learning. We make these properties explicit in two scenarios: first, active inference agents infer behaviors in reward-free environments, compared against both Q-learning and Bayesian model-based reinforcement learning agents; second, agents given zero prior preferences over rewards learn prior preferences over the observations corresponding to reward. We conclude by noting that this formalism can be applied to more complex settings (e.g., robotic arm movement, Atari games) if appropriate generative models can be formulated.
In short, we aim to demystify the behavior of active inference agents by presenting an accessible discrete state-space and discrete-time formulation, and we demonstrate these behaviors in an OpenAI Gym environment alongside reinforcement learning agents.
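The planning step sketched in the abstract can be illustrated on a toy discrete model. The sketch below is a hypothetical illustration, not the paper's implementation: the matrices A (likelihood), B (transitions), and C (log prior preferences over observations) follow common conventions in the active inference literature, and the expected free energy is computed as the standard risk-plus-ambiguity decomposition, with policies scored by a softmax over negative expected free energy. Note that "reward" never appears; preferred observations (encoded in C) play its role.

```python
import numpy as np

# Toy discrete generative model (hypothetical, for illustration only).
n_states, n_obs, n_actions = 4, 4, 2

# A: likelihood P(o|s) -- a slightly noisy identity mapping.
A = np.eye(n_obs, n_states) * 0.9 + 0.1 / n_obs
A /= A.sum(axis=0, keepdims=True)

# B[u]: transitions P(s'|s,u) -- action 0 cycles forward, action 1 backward.
B = np.stack([np.roll(np.eye(n_states), shift, axis=0) for shift in (1, -1)])

# C: log prior preferences over observations (the agent "prefers" outcome 3).
C = np.array([0.0, 0.0, 0.0, 2.0])
P_o = np.exp(C) / np.exp(C).sum()          # preferred outcome distribution P(o)

q_s = np.ones(n_states) / n_states         # current posterior beliefs Q(s)

def expected_free_energy(u):
    """G(u) = risk + ambiguity for a one-step policy u."""
    q_s_next = B[u] @ q_s                  # predicted states Q(s'|u)
    q_o = A @ q_s_next                     # predicted outcomes Q(o|u)
    # Risk: KL[Q(o|u) || P(o)] -- divergence from preferred outcomes.
    risk = np.sum(q_o * (np.log(q_o + 1e-16) - np.log(P_o + 1e-16)))
    # Ambiguity: expected entropy of the likelihood, E_Q(s'|u)[H[P(o|s')]].
    ambiguity = -np.sum(q_s_next * np.sum(A * np.log(A + 1e-16), axis=0))
    return risk + ambiguity

G = np.array([expected_free_energy(u) for u in range(n_actions)])
q_u = np.exp(-G) / np.exp(-G).sum()        # posterior over actions: softmax(-G)
action = int(np.argmin(G))                 # most probable action
```

Minimizing G trades off pragmatic value (risk, i.e., reaching preferred observations) against epistemic value (ambiguity, i.e., seeking informative states), which is the sense in which exploration emerges naturally rather than being engineered.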

Year:  2021        PMID: 33400903     DOI: 10.1162/neco_a_01357

Source DB:  PubMed          Journal:  Neural Comput        ISSN: 0899-7667            Impact factor:   2.026


Related articles: 14 in total

1.  Active Inference Through Energy Minimization in Multimodal Affective Human-Robot Interaction.

Authors:  Takato Horii; Yukie Nagai
Journal:  Front Robot AI       Date:  2021-11-26

2.  A step-by-step tutorial on active inference and its application to empirical data.

Authors:  Ryan Smith; Karl J Friston; Christopher J Whyte
Journal:  J Math Psychol       Date:  2022-02-04       Impact factor: 1.387

3.  Active inference and the two-step task.

Authors:  Sam Gijsen; Miro Grundei; Felix Blankenburg
Journal:  Sci Rep       Date:  2022-10-21       Impact factor: 4.996

4.  The Free Energy Principle for Perception and Action: A Deep Learning Perspective. [Review]

Authors:  Pietro Mazzaglia; Tim Verbelen; Ozan Çatal; Bart Dhoedt
Journal:  Entropy (Basel)       Date:  2022-02-21       Impact factor: 2.524

5.  Permutation Entropy as a Universal Disorder Criterion: How Disorders at Different Scale Levels Are Manifestations of the Same Underlying Principle.

Authors:  Rutger Goekoop; Roy de Kleijn
Journal:  Entropy (Basel)       Date:  2021-12-20       Impact factor: 2.524

6.  Simulating lesion-dependent functional recovery mechanisms.

Authors:  Noor Sajid; Emma Holmes; Thomas M Hope; Zafeirios Fountas; Cathy J Price; Karl J Friston
Journal:  Sci Rep       Date:  2021-04-02       Impact factor: 4.379

7.  Canonical neural networks perform active inference.

Authors:  Takuya Isomura; Hideaki Shimazaki; Karl J Friston
Journal:  Commun Biol       Date:  2022-01-14

8.  Degeneracy and Redundancy in Active Inference.

Authors:  Noor Sajid; Thomas Parr; Thomas M Hope; Cathy J Price; Karl J Friston
Journal:  Cereb Cortex       Date:  2020-10-01       Impact factor: 5.357

9.  The 2-D Cluster Variation Method: Topography Illustrations and Their Enthalpy Parameter Correlations.

Authors:  Alianna J Maren
Journal:  Entropy (Basel)       Date:  2021-03-08       Impact factor: 2.524

10.  On Epistemics in Expected Free Energy for Linear Gaussian State Space Models.

Authors:  Magnus T Koudahl; Wouter M Kouw; Bert de Vries
Journal:  Entropy (Basel)       Date:  2021-11-24       Impact factor: 2.524


Beijing Coyote Bioscience Co., Ltd. © 2022-2023.