Abstract

Consciousness and intelligence are properties that are often misunderstood as necessarily dependent on one another. The term artificial intelligence, and the kinds of problems it has managed to solve in recent years, have been used as arguments to claim that machines experience some form of consciousness. Following Russell’s analogy, if a machine can do what a conscious human being does, the likelihood that the machine is conscious increases. However, the social implications of this analogy are potentially catastrophic. Concretely, if rights are granted to entities that can solve the kinds of problems that a neurotypical person can, would a machine potentially have more rights than a person with a disability? For example, autism spectrum disorder can make a person unable to solve the kinds of problems that a machine solves. We believe the obvious answer is no, as problem-solving does not imply consciousness. Consequently, in this paper we argue that phenomenal consciousness, at least, cannot be modeled by computational intelligence, and that machines do not possess phenomenal consciousness, even though they may develop greater computational intelligence than human beings. To do so, we attempt to formulate an objective measure of computational intelligence and study how it presents in human beings, animals, and machines. Analogously, we study phenomenal consciousness as a dichotomous variable and how it is distributed across humans, animals, and machines.
