Axel Constant, Paul Badcock, Karl Friston, Laurence J Kirmayer.
Abstract
This paper proposes an integrative perspective on evolutionary, cultural and computational approaches to psychiatry. These three approaches attempt to frame mental disorders as multiscale entities and offer modes of explanation and modeling strategies that can inform clinical practice. Although each of these perspectives involves systemic thinking, each is limited in its ability to address the complex developmental trajectories and larger social systemic interactions that lead to mental disorders. Inspired by computational modeling in theoretical biology, this paper aims to integrate the modes of explanation offered by evolutionary, cultural and computational psychiatry in a multilevel systemic perspective. We apply the resulting Evolutionary, Cultural and Computational (ECC) model to Major Depressive Disorder (MDD) to illustrate how this integrative approach can guide research and practice in psychiatry.
Keywords: computational phenotyping; computational psychiatry; cultural psychiatry; evolutionary psychiatry; major depressive disorder (MDD)
Year: 2022 PMID: 35444580 PMCID: PMC9013887 DOI: 10.3389/fpsyt.2022.763380
Source DB: PubMed Journal: Front Psychiatry ISSN: 1664-0640 Impact factor: 5.435
Figure 1. Heuristic description of perception and action as inference. This figure presents a Bayesian graphical model of the relation between the (prior and likelihood) parameters that constitute an elementary generative model for action and perception. Here, we can imagine an agent that infers the cause “s” of the sensation “o” that it receives at time “1”. This inference is based on a likelihood A (P(ot|st)) about what generally causes “red” sensory entries (e.g., is this a car or a ball?), and on prior probabilistic knowledge about where the agent currently is in the world (e.g., am I facing a ball or a car?), or initial state D (P(s1)). Based on A and D, the agent can infer s1 (e.g., I am seeing a car). Then, assuming the agent acts based on its perception, it can further leverage knowledge about the dynamics of the world in which it exists, that is, about the transitions between different states of the world (e.g., ball to ball, ball to car, car to ball, etc.), which corresponds to probabilistic knowledge about transitions B (P(st+1|st,pi)). Based on A, B, and D, the agent can infer the possible courses of action “pi” to take and the future sensory outcomes that it will experience (e.g., red; blue). Representations of causes in the world are hidden states denoted by “s”; representations of allowable policies are denoted by “pi”. Both must be inferred; hence they appear as open white circles. Note that “o2” also has to be inferred, since policy selection (i.e., action) involves selecting a course of action toward future sensations. What is known to the agent are the parameters A, B, and D.
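The perceptual half of this scheme, inferring the hidden state s1 from the observation o1 via Bayes' rule with likelihood A and prior D, can be sketched numerically. The state space, observation space, and all probability values below are illustrative assumptions, not numbers from the paper:

```python
import numpy as np

# Hidden states s and observations o (toy example; values are assumptions).
states = ["car", "ball"]
observations = ["red", "blue"]

# Likelihood A: P(o | s). Rows index observations, columns index states.
# Assumed: cars are red 30% of the time, balls 80%.
A = np.array([[0.3, 0.8],   # P(red  | car), P(red  | ball)
              [0.7, 0.2]])  # P(blue | car), P(blue | ball)

# Prior over the initial state D: P(s1), e.g., cars are common on an open street.
D = np.array([0.9, 0.1])

def infer_state(obs_index, A, D):
    """Posterior P(s1 | o1) by Bayes' rule: proportional to A[o1, :] * D."""
    unnormalized = A[obs_index, :] * D
    return unnormalized / unnormalized.sum()

posterior = infer_state(observations.index("red"), A, D)
print(dict(zip(states, posterior.round(3))))  # {'car': 0.771, 'ball': 0.229}
```

Note how the strong prior D dominates: even though "red" is the likelier observation for a ball, the agent still infers "car", which is the same prior-dependence that the caption's blue-car example exploits to explain false inference.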
Heuristically, perception (the small L shape in the figure) rests on (implicitly) asking the question: “Given that I see ‘red’ (o1), and knowing that there is a prior probability (D) of cars and balls in this environment (e.g., an open street), and knowing that both balls and cars can be ‘red’ with a given probability (likelihood A), did a car or a ball cause me to see ‘red’?” The answer to that question is the inference over s1. In turn, action rests on asking the question: “Given the state I am currently in (s1), and knowing the probability of transitioning from s1 to another state (prior about transitions B), and knowing the sorts of courses of action I can engage in (policies pi), what course of action will lead to the most likely outcome in the future?” Inferring the current hidden state, the policy, and the future state and observation is done in a Bayes-optimal fashion, using Bayesian belief updating, which can be formulated in a neurobiologically plausible fashion. The inference is always Bayes-optimal; however, if, for instance, the truth about the world is that cars are never blue, while the agent believes that there is a high probability that cars are blue, this may lead to false perceptual inference. In short, the agent must have the right priors to provide an optimal account of her sensed world.
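The action question can be sketched the same way: each policy pi carries its own transition matrix B, and the predicted next observation is obtained by pushing the current state belief through B and then A. The policies, transition values, and "preferred outcome" below are illustrative assumptions; a full active inference scheme would instead minimize expected free energy, which also rewards information gain:

```python
import numpy as np

# Toy continuation of the perception example: states ("car", "ball"),
# observations ("red", "blue"). All numbers are assumptions.
A = np.array([[0.3, 0.8],   # P(red  | s)
              [0.7, 0.2]])  # P(blue | s)

# One transition matrix per policy: B[pi] encodes P(s_{t+1} | s_t, pi).
# Columns index the current state, rows the next state.
B = {
    "stay":      np.array([[1.0, 0.0],
                           [0.0, 1.0]]),  # the scene stays as it is
    "turn_away": np.array([[0.2, 0.2],
                           [0.8, 0.8]]),  # turning tends to bring a ball into view
}

def predict_observation(q_s, B_pi, A):
    """Predicted distribution over the next observation under policy pi:
    P(o_{t+1}) = A @ B_pi @ Q(s_t)."""
    return A @ (B_pi @ q_s)

# Current belief over s1 (e.g., the output of the perceptual inference step).
q_s = np.array([0.77, 0.23])

# Score each policy by the predicted probability of a preferred outcome,
# here (arbitrarily) seeing "blue" at the next time step.
preferred_obs = 1  # index of "blue"
scores = {pi: predict_observation(q_s, B_pi, A)[preferred_obs]
          for pi, B_pi in B.items()}
best_policy = max(scores, key=scores.get)  # selects "stay"
```

This makes the caption's point concrete: the agent never observes o2 directly; it infers it from A, B, and its current state belief, and chooses among policies on that basis.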
Modes of explanation and modeling strategies.
| Approach | Target concept | Mode of explanation | Modeling strategy |
|---|---|---|---|
| Evolutionary psychiatry | The concept of dysfunction | Developmentally aggravated vulnerabilities understood as proximate causes shaped by ultimate causes | Darwinian rationales |
| Cultural psychiatry | The concept of the harmful | Behavioral patterns causing psychological distress and functional impairment, configured at the subjective level and shaped by socionormative interactions and cultural affordances | Ecosocial model |
| Computational psychiatry | The concept of dysfunction; potential to model how harm and dysfunction interact | Suboptimal inference of perception and action caused by lesioned or atypically learned prior beliefs | Computational phenotyping |