Abstract

The science of consciousness has made great strides by focusing on the behavioural and neuronal correlates of experience. However, while such correlates are important for progress to occur, they are not enough if we are to understand even basic facts, for example, why the cerebral cortex gives rise to consciousness but the cerebellum does not, though it has even more neurons and appears to be just as complicated. Moreover, correlates are of little help in many instances where we would like to know if consciousness is present: patients with a few remaining islands of functioning cortex, preterm infants, non-mammalian species and machines that are rapidly outperforming people at driving, recognizing faces and objects, and answering difficult questions. To address these issues, we need not only more data but also a theory of consciousness—one that says what experience is and what type of physical systems can have it. Integrated information theory (IIT) does so by starting from experience itself via five phenomenological axioms: intrinsic existence, composition, information, integration and exclusion. From these it derives five postulates about the properties required of physical mechanisms to support consciousness. The theory provides a principled account of both the quantity and the quality of an individual experience (a quale), and a calculus to evaluate whether or not a particular physical system is conscious and of what. Moreover, IIT can explain a range of clinical and laboratory findings, makes a number of testable predictions and extrapolates to a number of problematic conditions. The theory holds that consciousness is a fundamental property possessed by physical systems having specific causal properties. It predicts that consciousness is graded, is common among biological organisms and can occur in some very simple systems. Conversely, it predicts that feed-forward networks, even complex ones, are not conscious, nor are aggregates such as groups of individuals or heaps of sand. Also, in sharp contrast to widespread functionalist beliefs, IIT implies that digital computers, even if their behaviour were to be functionally equivalent to ours, and even if they were to run faithful simulations of the human brain, would experience next to nothing.
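The quantitative side of the calculus mentioned above is implemented in the open-source PyPhi package (github.com/wmayner/pyphi), the reference software for IIT 3.0. The sketch below is a minimal illustration rather than an example from the paper: it assumes the PyPhi 1.x API, and the two-node copy network is our own toy choice.

```python
import numpy as np
import pyphi

# Minimal sketch (assuming the PyPhi 1.x API): a toy network of two binary
# nodes, each copying the other's previous state. The transition probability
# matrix is in state-by-node form; rows are indexed by the current network
# state in PyPhi's little-endian order: (0,0), (1,0), (0,1), (1,1).
tpm = np.array([
    [0, 0],  # (A=0, B=0) -> A'=0, B'=0
    [0, 1],  # (A=1, B=0) -> A'=0, B'=1
    [1, 0],  # (A=0, B=1) -> A'=1, B'=0
    [1, 1],  # (A=1, B=1) -> A'=1, B'=1
])
cm = np.array([[0, 1],
               [1, 0]])  # A feeds B and B feeds A; no self-connections

network = pyphi.Network(tpm, cm=cm, node_labels=("A", "B"))
subsystem = pyphi.Subsystem(network, (1, 0), (0, 1))  # state A=1, B=0
print(pyphi.compute.phi(subsystem))  # "big Phi": system-level integration
```

`pyphi.compute.phi` returns the integrated information of the candidate system in its current state; in IIT, the complex is the subsystem for which this value is maximal.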

Highlights

  • The science of consciousness has made great strides by focusing on the behavioural and neuronal correlates of experience. While such correlates are important for progress to occur, they are not enough if we are to understand even basic facts, for example, why the cerebral cortex gives rise to consciousness but the cerebellum does not, though it has even more neurons and appears to be just as complicated

  • In sharp contrast to widespread functionalist beliefs, integrated information theory (IIT) implies that digital computers, even if their behaviour were to be functionally equivalent to ours, and even if they were to run faithful simulations of the human brain, would experience next to nothing

  • Should we attribute experience to all mammals, to all vertebrates, to invertebrates such as cephalopods and bees or even to all multicellular animals? What about cultured organoids that mimic the cellular organization of the developing human brain [8]? And what about the sophisticated machines that run software designed to substitute for conscious humans in many complicated tasks?


Summary

Integrated information theory

As we move away from people, the behavioural and neuronal correlates of consciousness (BCC and NCC) become progressively less helpful for establishing the presence of consciousness. IIT holds that even inactive neurons contribute to experience, as long as they could have made a difference to the state of the complex. If the same neurons were not merely inactive but pharmacologically or optogenetically inactivated, they would cease to contribute to consciousness: even though their actual state is the same, they would no longer specify a cause–effect repertoire, since they could not affect the probability of possible past and future states of the complex (the sketch after this passage makes this concrete with a toy gate).

Another counterintuitive prediction of IIT is that if the efficacy of the 200 million callosal fibres through which the two cerebral hemispheres communicate were progressively reduced, there would come a moment at which, for a minimal further change in the traffic of neural impulses across the callosum, consciousness would change in an all-or-none manner: experience would go from being a single one to suddenly splitting into two separate experiencing minds (one linguistically dominant), as we know to be the case in split-brain patients [101,102].

The more the postulates of IIT are validated in situations in which we are reasonably confident about whether and how consciousness changes, the more we can use the theory to extrapolate and make inferences about situations in which we are less confident: brain-damaged patients, newborn babies, alien animals, complicated machines and other far-fetched scenarios.
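The notion of a cause–effect repertoire, and why an inactivated element fails to specify one, can be illustrated with a toy example. The following is our own minimal sketch in Python, not the paper's formalism or the full IIT 3.0 calculus: it computes the cause repertoire of a single AND gate under a uniform prior over its two binary inputs.

```python
from itertools import product

# Toy illustration of a cause repertoire (not the full IIT 3.0 calculus):
# given the current output of an AND gate and a uniform prior over its two
# binary inputs, which past input states could have caused that output?

def and_gate(a, b):
    return a & b

def cause_repertoire(output, active=True):
    """Return P(past inputs | current output) over the four input states.
    An inactivated gate no longer constrains its inputs, so the repertoire
    stays at the uniform prior: the mechanism specifies nothing about the
    past, even though its current state is unchanged."""
    states = list(product((0, 1), repeat=2))
    if not active:
        return {s: 1 / len(states) for s in states}
    compatible = [s for s in states if and_gate(*s) == output]
    return {s: 1 / len(compatible) if s in compatible else 0.0
            for s in states}

print(cause_repertoire(1))                # all probability mass on (1, 1)
print(cause_repertoire(1, active=False))  # uniform: no cause information
```

On a uniform prior, an active AND gate in state 1 pins its past inputs to (1, 1); the same gate, inactivated, leaves the prior untouched. This is the sense in which an element that cannot make a difference specifies no cause–effect repertoire, whatever its current state.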

