Abstract

Searle's Chinese Room argument is a general argument that purports to prove that machines do not have mental states in virtue of their programming. I claim that the argument expresses powerful but mistaken intuitions about understanding and the first-person point of view. A distinction is drawn between a competence sense and a performance sense of 'understanding texts'. It is argued that the Chinese Room intuition looks for a special experience (performance) of comprehension, whereas artificial intelligence is attempting to explain the knowledge (competence) required to understand texts. Moreover, a dilemma is sketched for the argument: either Searle hasn't identified the appropriate subject of understanding, or he may understand after all. Finally, I question the underlying assumption that the general definition of mental states requires a projectable-by-us first-person point of view.
