Abstract

This article examines the social acceptability and governance of emotional artificial intelligence (emotional AI) in children’s toys and other child-oriented devices. To explore this, it draws on interviews with stakeholders with a professional interest in emotional AI, toys, children and policy, considering the implications of using emotional AI in children’s toys and services. It also reports a demographically representative UK national survey ascertaining parental perspectives on networked toys that utilise data about emotions. The article highlights disquiet about the emergence of generational unfairness, which encompasses injustices regarding the datafication of childhood, manipulation, parental vulnerability, synthetic personalities, child and parental media literacy, and the need for improved governance. It concludes with practical recommendations for regulators and the toy industry.

Highlights

  • Against the context of increasing datafication of children (Lupton and Williamson, 2017) and surveillance capitalism (Zuboff, 2019), this article investigates children’s toys and services that make use of emotional AI

  • On the basis of rising interest in the use of emotional AI and biosensing across diverse life domains (McStay, 2018), growth in networked objects, and the long history of toys and care (Turkle, 2011), it is likely that emotion and affect-based entanglement between children, networked objects and AI-based playthings is picking up speed

  • Against a context of literature on dataveillance and children, connected and smart toys, emotional AI and criticism of affective computing, the leading theme to emerge from interviewee concerns is generational unfairness


Introduction

Against the context of increasing datafication of children (Lupton and Williamson, 2017) and surveillance capitalism (Zuboff, 2019), this article investigates children’s toys and services that make use of emotional AI. The 2010s saw the emergence of ‘connected toys’, which rely on the internet, Wi-Fi and Bluetooth, and ‘smart toys’, defined by sensors, voice and/or image recognition software, self-learning algorithms, scope for interaction with children and relatively easy-to-use control software (Holloway and Green, 2016; Winfield, 2012). These smart and connected toys raised security concerns (Chaudron et al, 2019) and apprehension about deception in children’s relationships with AI systems (Jones and Meurer, 2016). To explore issues arising from analysis of the interview data, we sought insights from parents regarding: (1) the acceptability of emotoys and the use of emotional AI in child-focused technologies; and (2) the terms by which use of these intimate technologies in toys should be governed. The absence of intersubjective sensitivities avoided interviewer bias, a common problem in ethical and privacy-related research (Zureik and Stalker, 2010).

