Abstract

Models of consciousness are usually developed within physical monist or dualistic frameworks, in which the structure and dynamics of the mind are derived from the workings of the physical brain. Little attention has been given to modelling consciousness within a mental monist framework, deriving the structure and dynamics of the mental world from primitive mental constituents only—with no neural substrate. Mental monism is gaining attention as a candidate solution to Chalmers’ Hard Problem on philosophical grounds, and it is therefore timely to examine possible formal models of consciousness within it. Here, I argue that the austere ontology of mental monism places certain constraints on possible models of consciousness, and propose a minimal set of hypotheses that a model of consciousness (within mental monism) should respect. From those hypotheses, it would be possible to construct many formal models that permit universal computation in the mental world, through cellular automata. We need further hypotheses to define transition rules for particular models, and I propose a transition rule with the unusual property of deep copying in the time dimension.
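
To make the final claim concrete, the following is a minimal sketch of what a cellular-automaton transition rule that "deep copies in the time dimension" might look like. The representation, the specific rule, and all names are illustrative assumptions, not the model proposed in the paper: the point is only that a transition rule can consult and carry forward earlier time slices rather than the immediately preceding one alone.

```python
import copy

# Hypothetical illustration: a 1-D cellular automaton whose transition rule
# consults (and deep-copies) earlier time slices, not just the previous one.
# The rule and all names below are assumptions for illustration only.

def step(history):
    """Compute the next generation from the full recorded history.

    `history` is a list of generations (lists of 0/1 cells), oldest first.
    This example rule XORs each cell's left/right neighbours in the latest
    generation with the same cell's value in the *oldest* generation, so
    past states are copied forward through time rather than discarded.
    """
    latest = history[-1]
    oldest = history[0]
    n = len(latest)
    nxt = [
        latest[(i - 1) % n] ^ latest[(i + 1) % n] ^ oldest[i]
        for i in range(n)
    ]
    # Deep-copy the history so the new state carries its own copy of the past.
    return copy.deepcopy(history) + [nxt]

# Usage: start from a single live cell and run a few steps.
history = [[0, 0, 0, 1, 0, 0, 0]]
for _ in range(4):
    history = step(history)
for generation in history:
    print(generation)
```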
