Abstract

Imagine advanced computers that could, by virtue merely of being programmed in the right ways, act, react, communicate, and otherwise behave like humans. Might such computers be capable of understanding, thinking, believing, and the like? The framework developed in this paper for tackling challenging questions of concept application (in any realm of discourse) answers in the affirmative, contrary to Searle’s famous ‘Chinese Room’ thought experiment, which purports to prove that ascribing such mental processes to computers like these would be necessarily incorrect. The paper begins by arguing that the core issue concerns language, specifically the discourse-community-guided mapping of phenomena onto linguistic categories. It then offers a model of how people adapt language to deal with novel states of affairs and thereby lend generality to their words, employing processes of assimilation, lexemic creation, and accommodation (in intersense and intrasense varieties). Attributions of understanding to some computers lie in the middle range on a spectrum of acceptability and are thus reasonable. Possible objections deriving from Searle’s writings require supplementing the model with distinctions between present and future acceptability, and between contemplated and uncontemplated word uses, as well as a literal-figurative distinction that is more sensitive than Searle’s to actual linguistic practice and the multiplicity of subsenses possible within a single literal sense. The paper then critiques two misleading rhetorical features of Searle’s Chinese Room presentation, and addresses a contemporary defense of Searle that seems to confront the sociolinguistic issue, but fails to allow for intrasense accommodation. It concludes with a brief consideration of the proper course for productive future discussion.

Highlights

  • Imagine advanced computers that could, by virtue merely of being programmed in the right ways, act, react, communicate, and otherwise behave like humans

  • I will develop a simple model of how language-users adapt language to deal with novel states of affairs and thereby lend generality to their words. This account will be used to show, pace Searle, that the computer-understanding question—not atypically for non-trivial questions of proper classification—is a normative and not descriptive one, and that there is nothing unreasonable about maintaining that computers with the right programs and input–output relations are capable of understanding and like mental processes

  • The focal point of the computer-understanding controversy has been Searle's (1980a) familiar Chinese Room thought experiment, which is directed against those who believe that computers that pass the Turing test—meaning that their behavior cannot be distinguished reliably from that of a normal human being—should be regarded as entities accurately describable with mentalistic terms like “thinking” and “understanding.” The thought experiment envisions a person who knows no Chinese following a program’s rules for manipulating Chinese symbols, producing responses indistinguishable from a native speaker’s while understanding nothing.


Summary

Reorienting the computer understanding debate

Suppose that researchers develop computers whose communicative and other behaviors are indistinguishable from those of normal human beings. Classifications and characterizations are precisely what are at issue: where the facts do not unambiguously prescribe the terms to be used in thinking and talking, philosophers’ investigations are of necessity partly linguistic, comprehending both world and words. In such cases scholars are relying upon models of conceptual structure and change, whether they realize it or not. I will develop a simple model of how language-users adapt language to deal with novel states of affairs and thereby lend generality to their words. This account will be used to show, pace Searle, that the computer-understanding question—not atypically for non-trivial questions of proper classification—is a normative and not a descriptive one, and that there is nothing unreasonable about maintaining that computers with the right programs and input–output relations are capable of understanding and like mental processes. The paper does not attempt a sophisticated deployment of linguistic theory, as this is both unnecessary for its limited dialectical purposes and likely to add only length and distracting complexity to the analysis.

The Chinese Room challenge is a normative linguistic one
Is talk of computer understanding acceptable?
Accommodation anxieties
Conclusion: the way forward