Abstract

This article advocates for a hermeneutic model for children-AI interactions (children aged 7–11 years) in which the desirable purpose of children’s interaction with artificial intelligence (AI) systems is children’s growth. The article perceives AI systems with machine-learning components as having a recursive element when interacting with children: they can learn from encounters with children and incorporate data from these interactions, not only from prior programming. Given the purpose of growth and this recursive element of AI, the article argues for refining the interpretation of bias within the AI ethics and responsible AI discourse. Interpreting bias as a preference, and distinguishing between positive (pro-diversity) and negative (discriminatory) biases, would serve children’s healthy psychological and moral development. The human-centric AI discourse advocates aligning the capacities of humans with the capabilities of machines by focusing both on the purpose of humans and on the purpose of machines for humans. The emphasis on mitigating negative biases through data protection, AI law, and certain value-sensitive design frameworks demonstrates that the purpose of the machine for humans is prioritized over the purpose of humans. These top-down frameworks often narrow the purpose of machines to doing no harm and fail to account for the bottom-up views and developmental needs of children. Therefore, applying a growth model for children-AI interactions that incorporates learning from negative AI-mediated biases and amplifies positive ones would benefit children’s development and children-centric AI innovation. Consequently, the article explores: What challenges arise from mitigating negative biases and amplifying positive biases in children-AI interactions, and how can a growth model address them? To answer this, the article recommends applying a growth model in open AI co-creational spaces with and for children. In such spaces, human–machine and human–human value alignment methods can be collectively applied in ways that (1) sensitize children to the effects of AI-mediated negative biases on themselves and others; (2) enable children to appropriate top-down values of diversity and non-discrimination and imbue them with their own meanings; (3) enforce children’s right to identity and non-discrimination; (4) guide children in developing an inclusive mindset; (5) inform top-down normative AI frameworks with children’s bottom-up views; and (6) contribute to design criteria for children-centric AI. Applying such methods under a growth model in AI co-creational spaces with children could yield an inclusive co-evolution between responsible young humans in the loop and children-centric AI systems.
