Abstract

Rapid advancements in large language models (LLMs) make child-safe design for their youngest users crucial. This article therefore offers child-centred AI design and policy recommendations to help make LLMs utilised in conversational and generative AI systems safer for children. Conceptualising the risk of LLMs as 'an empathy gap', this research-based conceptual article focuses on the need to design LLMs that prevent or mitigate the risks of responding inappropriately to children's personal disclosures or accidentally promoting harm. The article synthesises selected cases of human-chatbot interaction and research findings across education, computer science and human-computer interaction studies. It concludes with practical recommendations for child-safe AI across eight dimensions of design and policy: content and communication; human intervention; transparency; accountability; justifiability; regulation; school-family engagement; and child-centred design methodologies. These eight dimensions are tailored to a variety of stakeholders, from policymakers and AI developers to educators and caregivers.

