The science of consciousness has made great strides in recent decades. However, the proliferation of competing theories makes it difficult to reach consensus about artificial consciousness. While for purely scientific purposes we might wish to adopt a ‘wait and see’ attitude, we may soon face practical and ethical questions about whether, for example, artificial systems are capable of suffering. Moreover, many of the methods used for assessing consciousness in humans and even non-human animals are not straightforwardly applicable to artificial systems. With these challenges in mind, I propose that we look for ecumenical heuristics for artificial consciousness, enabling us to make tentative assessments of the likelihood of consciousness arising in different artificial systems. I argue that such heuristics should have three main features: they should be (i) intuitively plausible, (ii) theoretically neutral, and (iii) scientifically tractable. I claim that the concept of general intelligence, understood as a capacity for robust, flexible, and integrated cognition and behavior, satisfies these criteria and may thus provide the basis for such a heuristic, allowing us to make initial, cautious estimates of which artificial systems are most likely to be conscious.