Abstract

This paper examines the aurality of voice-activated AIs (VA AIs) through uncanny encounters with the devices. Glitches such as Alexa spontaneously bursting into laughter, or accidental activations putting users’ privacy at risk, have incited suspicion among users about the motives of tech companies and the technical capabilities of their devices. Building on previous contributions showing that the feminised voices of VA AIs are strategically designed to display reassuring attributes and obscure surveillance practices, I discuss the aural moments in which VA AIs fail to reassure, shifting from convenience to threat through the experience of the uncanny. I argue that anxieties tied to VA AIs are both produced and mediated by their aurality: their voices and their listening practices alike. I theorise uncanny encounters with the voice and listening capacities of VA AIs — glitches, features that seek to imitate humans, disembodied voices and disembodied listening, and invasions of privacy — as enacting perversions of care and inducing fears of impersonation and intrusion. This paper contributes to the literature on the specificity of sound in conceptions of the uncanny valley, and also seeks to enrich conceptions of vocality and listening as vectors of anxiety within the neoliberal condition.
