Abstract

This paper examines Harold Garfinkel’s work with ELIZA and a related program, LYRIC, from 1967 to 1969. AI researchers have tended to treat successful human–machine interaction as if it relied primarily on non-human machine characteristics, and thus the often-reported attribution of human-like qualities to communication with computers has been criticized as a misperception, with humans who make such reports dismissed as “deluded.” By contrast, Garfinkel, building on two decades of prior research on information and communication, argued that the ELIZA and LYRIC “chatbots” achieved interactions that felt human to many users by exploiting human sense-making practices. In keeping with his long-term practice of using “trouble” as a way of discovering the taken-for-granted practices of human sense-making, Garfinkel designed scripts for ELIZA and LYRIC that he could disrupt in order to reveal how their success depended on human social practices. Hence, the announcement “Machine Down” by the chatbot was a desired result of Garfinkel’s interactions with it. This early (but largely unknown) research not only has implications for understanding contemporary AI chatbots, but also opens possibilities for respecifying current information systems design and computational practices to provide for the design of more flexible information objects.
