Abstract

Artificial Intelligence, the study and emulation of intelligence, has brought forth a host of epistemological and ethical issues (Anderson 1964). In the wake of viable natural-language-understanding programs, the question, "What does it take to put together an understanding program?" has superseded the hypothetical query, "What if machines can think?" Philosophers who have been concerned with these issues have adopted two different points of view: (1) programs will never be able to emulate human intelligence, mainly because they lack consciousness and intentionality (Scriven 1953; Dreyfus 1979; Searle 1980); (2) programs already manifest intelligent behavior in limited domains and are potentially able to deal with larger domains in due course (Hofstadter 1980, 680; Schank 1975, 4; Winograd 1972, 2). Although both sides agree that the computer is a useful tool for the simulation of behavior, the argument ensues over whether programs can go beyond mere imitation of human behavior, whether they can be creative, whether they can think. This paper investigates these questions from a Wittgensteinian point of view. Wittgenstein's philosophy does not seem to be in the least amenable to "thinking" or "understanding" programs. It is centered around the human being, and is thus decidedly anthropocentric; along these lines, Wittgenstein's denial of the credibility of "thinking" programs is categorical: "But machines can't think" (PI, 360). Therefore, it is surprising that he is considered to be the forerunner of "the concerns of modern AI" (Wilks 1976, 222). Even more problematic is the use of quotes from Wittgenstein to support advocates of "strong" AI (i.e., people who believe that a program is a mind) (Searle 1980), as well as contenders
