Abstract

This paper explores the history of ELIZA, a computer programme approximating a Rogerian therapist, developed by Joseph Weizenbaum at MIT in the 1960s as an early AI experiment. ELIZA’s reception provoked Weizenbaum to re-appraise the relationship between ‘computer power and human reason’ and to attack the ‘powerful delusional thinking’ about computers and their intelligence that he understood to be widespread in the general public and also amongst experts. The root issue for Weizenbaum was whether human thought could be ‘entirely computable’ (reducible to logical formalism). This also provoked him to re-consider the nature of machine intelligence and to question the instantiation of its logics in the social world, which would come to operate, he said, as a ‘slow acting poison’. Exploring Weizenbaum’s 20th Century apostasy, in the light of ELIZA, illustrates ways in which contemporary anxieties and debates over machine smartness connect to earlier formations. In particular, this article argues that it is in its designation as a computational therapist that ELIZA is most significant today. ELIZA points towards a form of human–machine relationship now pervasive, a precursor of the ‘machinic therapeutic’ condition we find ourselves in, and thus speaks very directly to questions concerning modulation, autonomy, and the new behaviorism that are currently arising.

Highlights

  • ‘This joining of an illicit metaphor to an ill-thought out idea breeds, and is perceived to legitimate, such perverse propositions as that, for example, a computer can be programmed to become an effective psychotherapist’ (Weizenbaum, 1976: 206). As artificial and human intelligences become more tightly enmeshed, long-standing questions around the application of machine logics to human affairs arise with new urgency

  • Behaviorist nudge-based (Sunstein and Thaler, 2008) social programmes launched by governments ‘for our own good’—and by corporations for theirs—instantiate a therapeutic relationship between

  • ELIZA is an early iteration of a form of human–machine relationship now pervasive, a precursor of what I term the ‘machinic therapeutic’ condition we find ourselves in, and speaks very directly to questions concerning modulation, autonomy, and the new behaviorism that are currently arising

Introduction

‘This joining of an illicit metaphor to an ill-thought out idea breeds, and is perceived to legitimate, such perverse propositions as that, for example, a computer can be programmed to become an effective psychotherapist’ (Weizenbaum, 1976: 206). In ‘Computer Power and Human Reason’ (1976), the work for which he became best known, Weizenbaum called for limits on the expansion of computational logics and systems into human affairs. He argued that the desires of the AI community to create intelligent artificial life, and to establish a more fully cybernetic society, were impossible to fully realize, and undesirable in any case. Reporting that some psychiatrists believed the DOCTOR computer program could grow into a ‘nearly completely automatic form of psychotherapy’, Weizenbaum lamented that specialists in human to human interaction could ‘...view the simplest mechanical parody of a single interviewing technique as having captured anything of the essence of a human encounter’ (1976: 3). He concluded that such a response was only possible because the therapists concerned already had a view of the world as machine, and already thought of themselves as ‘information processor(s)’; the outlines of his engagement with behaviorism here begin to appear. Machine intelligence should be valued, he argued, but its nature should be more closely addressed

The critique of AI science
Alien logics as discursively prior
ELIZA the Rogerian machine?
Self‐actualization versus dronic de‐actualization
The droning of experience?
Remaining non‐inhuman beings?