Abstract

Amazon’s home assistant, Echo, became a key piece of evidence in a 2015 murder case, as the device was believed to have recorded a crucial conversation on the night of the victim’s death. In the ‘era of ubiquitous listening’, where devices constantly scan for user voice commands to perform tasks, violations of privacy result from users’ responses to smart technology. This exploratory paper examines behavioural vulnerabilities that are prone to exploitation in the adoption of speech-activated home assistants and considers the implications in terms of the privacy challenges arising from mass adoption of the technology. Anthropomorphism is a behavioural trait that increases the likelihood of speech-activated devices being exploited; it encompasses factors such as intonation cues, visual cues, convenience, and sociability. Habituation to the presence of speech-activated home assistants gives rise to challenges to user privacy and security. For practitioners, legal provision must be made to accommodate potentially ubiquitous speech-activated technology.
