| Literature DB >> 34652601 |
Hugo Bottemanne, Karl J Friston.
Abstract
Newly emerging infectious diseases, such as the coronavirus (COVID-19), create new challenges for public healthcare systems. Before effective treatments become available, countering the spread of these infections depends on protective behaviours that mitigate transmission, such as social distancing, respecting lockdowns, wearing masks, frequent handwashing, travel restrictions, and vaccine acceptance. Previous work has shown that enacting protective behaviours depends on beliefs about individual vulnerability, threat severity, and one's ability to engage in such protective actions. However, little is known about the genesis of these beliefs in response to an infectious disease epidemic, or about the cognitive mechanisms that may link these beliefs to decision making. Active inference (AI) is a recent approach to behavioural modelling that integrates embodied perception, action, belief updating, and decision making. This approach provides a framework for understanding the behaviour of agents in situations that require planning under uncertainty. It assumes that the brain infers the hidden states that cause sensations, predicts the perceptual feedback produced by adaptive actions, and chooses actions that minimize expected surprise in the future. In this paper, we present a computational account of how individuals update their beliefs about risk and thereby commit to protective behaviours. We show how perceived risks, beliefs about future states, sensory uncertainty, and the outcomes expected under each policy can determine individual protective behaviours. We suggest that these mechanisms are crucial for assessing how individuals cope with uncertainty during a pandemic, and we highlight the relevance of these new perspectives for public health policy.
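As a minimal illustration of the belief-updating step described in the abstract (inferring hidden states from sensations via Bayes' rule), the following Python sketch updates a belief over a single binary hidden state, "environmental infection risk", given one noisy observation. The state, observation, and probability values are hypothetical assumptions chosen for illustration, not quantities taken from the paper.

```python
import numpy as np

# Hypothetical illustration (not from the paper): Bayesian belief updating
# over a binary hidden state, "infection risk in my environment" (low/high),
# given a noisy observation such as local case reports.

prior = np.array([0.7, 0.3])             # P(risk = low), P(risk = high)

# Likelihood P(observation | state): rows = observations (no cases, cases),
# columns = hidden states (low risk, high risk).
likelihood = np.array([[0.8, 0.3],
                       [0.2, 0.7]])

observation = 1                           # "cases reported" is observed

# Posterior is proportional to likelihood x prior (Bayes' rule); normalise.
posterior = likelihood[observation] * prior
posterior /= posterior.sum()

print(posterior)                          # belief in "high risk": 0.3 -> 0.6
```

With these illustrative numbers, observing reported cases shifts the belief in high risk from 0.3 to 0.6; in the full active inference scheme, this updated belief then feeds into policy selection, as sketched after the figure caption below.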
Keywords: Active inference; Bayesian inference; Coronavirus; Health belief model; Pandemic; Protection motivation theory
Year: 2021 PMID: 34652601 PMCID: PMC8518276 DOI: 10.3758/s13415-021-00947-0
Source DB: PubMed Journal: Cogn Affect Behav Neurosci ISSN: 1530-7026 Impact factor: 3.526
Fig. 1 Example of a generative model in active inference (AI). A generative model is a probabilistic specification of how outcomes are caused. Usually, the model is expressed in terms of a likelihood (the probability of consequences given causes) and priors over causes; examples of these probability distributions are provided in the green boxes. Bayesian model inversion refers to the inverse mapping from consequences to causes, i.e., estimating the hidden states that cause outcomes. In approximate Bayesian inference, one specifies an approximate posterior distribution (blue box) with a fixed functional form, chosen to make model inversion tractable.

Left panel: the equations in the green boxes specify the generative model. The likelihood is specified by a matrix A, whose elements encode the probability of each outcome under each hidden state; Cat denotes a categorical probability distribution. The priors include probabilistic transitions among hidden states (the B matrices), which can depend upon actions; actions are in turn determined by policies (i.e., sequences of actions, denoted by π). The key aspect of this generative model is that policies are more probable a priori if they minimize the expected free energy G, which depends upon prior preferences about outcomes or costs (encoded by C). Finally, the vector D specifies prior beliefs about the initial state. This completes the specification of the model in terms of the parameters A, B, C, and D.

Right panel: the same generative model shown as a Bayesian dependency graph, depicting the conditional dependencies among hidden states and how they cause outcomes. Open circles are random variables (hidden states and policies), filled circles denote observable outcomes, and squares indicate fixed or known variables, such as the model parameters. (See Friston, Parr, De Vries (2017) for a detailed explanation of the variables and mathematics.)
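The caption's A, B, C, D parameterisation can be made concrete in code. The following is a minimal, hypothetical numpy sketch (not the authors' implementation) of one-step expected free energy G under a discrete categorical generative model: the state and outcome labels, the numbers, and the helper names softmax and expected_free_energy are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch (not the authors' code): one-step expected free energy
# G for a discrete generative model with likelihood A, transitions B (one per
# policy), log-preferences C, and initial-state prior D, as in Fig. 1.

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hidden states: (low risk, high risk); outcomes: (stay healthy, fall ill).
A = np.array([[0.9, 0.4],        # P(outcome | state); columns sum to 1
              [0.1, 0.6]])

# One transition matrix per (one-step) policy.
B = {
    "protect":    np.array([[0.9, 0.7],   # protective action pushes the
                            [0.1, 0.3]]), # state toward low risk
    "do_nothing": np.array([[0.7, 0.2],
                            [0.3, 0.8]]),
}

C = np.array([2.0, -2.0])        # log-preferences over outcomes (prefer health)
D = np.array([0.5, 0.5])         # prior over the initial hidden state

def expected_free_energy(policy):
    qs = B[policy] @ D                     # predicted state under this policy
    qo = A @ qs                            # predicted outcome distribution
    # Risk: divergence of predicted from preferred outcomes (equals a KL
    # divergence up to a constant that is the same for every policy).
    risk = qo @ (np.log(qo + 1e-16) - C)
    # Ambiguity: expected entropy of the likelihood mapping.
    ambiguity = qs @ -(A * np.log(A + 1e-16)).sum(axis=0)
    return risk + ambiguity

G = np.array([expected_free_energy(p) for p in B])
print(dict(zip(B, softmax(-G))))           # prior over policies: P(pi) = softmax(-G)
```

Under these illustrative numbers, the protective policy has the lower expected free energy, so softmax(-G) assigns it the higher prior probability (roughly 0.66 versus 0.34), consistent with the caption's statement that policies minimizing G are a priori more probable.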