Abstract

The CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is found throughout many websites. By challenging users to read a line of scrambled letters, identify crosswalks in an image, or complete some other task that is difficult for a computer but comparatively trivial for a human user, a CAPTCHA verifies that the user is an actual human and not software meant to interact maliciously with the website. This mundane and easily overlooked interface element is an important site where corporate interests and priorities act upon the people who encounter it. CAPTCHAs operate under the assumptions that difference can be detected and that it should be enforced. Because not all humans are able to solve a CAPTCHA, the test additionally enforces a boundary between humans and users. In this article, I analyze the discourses of Google’s reCAPTCHA and argue that this common interface element is a multi-faceted site of production where users’ labor is extracted every time they solve a reCAPTCHA. The products of this labor are threefold: (1) spam reduction, (2) training data for artificial intelligence and machine learning, and (3) an ideal of a normative web user. This last product is often overlooked but has wide-reaching implications. Users who solve reCAPTCHAs are producers but are simultaneously produced as users by the reCAPTCHA. The only humans who qualify as ‘authentic’ users are those who can perform this productive labor. Because Google’s reCAPTCHA operates as a site of invisible digital labor, this article works toward making such labor more visible so that users can become more aware of the work they are being asked to perform, and to what ends.
