| Literature DB >> 33286469 |
Abstract
Models of consciousness are usually developed within physical monist or dualistic frameworks, in which the structure and dynamics of the mind are derived from the workings of the physical brain. Little attention has been given to modelling consciousness within a mental monist framework, deriving the structure and dynamics of the mental world from primitive mental constituents only, with no neural substrate. Mental monism is gaining attention as a candidate solution to Chalmers' Hard Problem on philosophical grounds, and it is therefore timely to examine possible formal models of consciousness within it. Here, I argue that the austere ontology of mental monism places certain constraints on possible models of consciousness, and propose a minimal set of hypotheses that a model of consciousness (within mental monism) should respect. From those hypotheses, it would be possible to construct many formal models that permit universal computation in the mental world, through cellular automata. We need further hypotheses to define transition rules for particular models, and I propose a transition rule with the unusual property of deep copying in the time dimension.
Keywords: Hard Problem; automata theory; consciousness; idealism; mental models
Year: 2020 PMID: 33286469 PMCID: PMC7517233 DOI: 10.3390/e22060698
Source DB: PubMed Journal: Entropy (Basel) ISSN: 1099-4300 Impact factor: 2.524
Figure 1. Structure of the paper.
Figure 2. (a) A series of successive experientiae. (b) After the terminal experientia has fissioned.
Figure 3. (a) Initial state; (b) one experientia buds; (c) the new line grows for p moments.
Figure 4. (a) Initial state; (b) one experientia deep-copies; (c) the new line grows for p moments.
Figure 5. (a) Initial state; (b) after k deep-copies from mi.
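The abstract's claim that cellular automata can support universal computation can be illustrated with a minimal sketch. The code below is not the paper's model (the paper's deep-copying transition rule is not specified in this record); it implements the standard elementary automaton Rule 110, which is known to be Turing-complete, purely as an assumed illustrative stand-in.

```python
def rule110_step(cells):
    """One synchronous update of a 1-D elementary cellular automaton
    under Rule 110 (known to support universal computation).
    Uses periodic (wrap-around) boundary conditions."""
    n = len(cells)
    # Rule 110 lookup: neighbourhood (left, centre, right) -> next state
    table = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
             (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
    return [table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# Evolve a single live cell for a few steps, keeping the full history
row = [0] * 10 + [1] + [0] * 10
history = [row]
for _ in range(5):
    row = rule110_step(row)
    history.append(row)
```

The ordinary transition rule here depends only on the immediately preceding moment; the paper's proposed rule differs precisely in also copying structure along the time dimension.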