Abstract

Like corporations, governments around the world have adopted strategies for becoming leaders in the development and use of artificial intelligence, fostering environments congenial to AI innovators. In most cases, however, neither corporations nor policymakers have sufficiently addressed how children’s rights fit into their AI strategies or products. The role of artificial intelligence in children’s lives—from how children play, to how they are educated, to how they consume information and learn about the world—is expected to increase exponentially over the coming years. It is therefore imperative that stakeholders evaluate the risks and opportunities of artificial intelligence thoughtfully and systematically, with the aim of maximizing children’s wellbeing. This paper discusses AI and children’s rights law and ethics in the context of (1) social media platforms such as YouTube; (2) smart toys; and (3) artificial intelligence in education (AIEd) and EdTech applications, including during COVID-19. The Hello Barbie, CloudPets, and Cayla smart-toy case studies are analyzed, as are the ElsaGate social media hacks and education’s new Intelligent Tutoring Systems and surveillance apps. Though AI holds valuable benefits for children, it presents particular challenges around child safety, privacy and data protection, device security, surveillance, and consent. Ethics by design will continue to gain strength as a consideration throughout the development and use of AI systems, including systems designed for children and youth. With respect to children, the Children’s Rights by Design (“CRbD”) standard for AI systems is a useful safeguard against data-driven AIEd business models that could exploit or otherwise harm children.

COVID-19 has greatly exacerbated pre-existing EdTech risks. Overnight, education was forced to depend on technology rather than simply use it to enable new teaching methods. During the spring of 2020 alone, schools in 192 countries were closed; UNESCO estimates that 91% of the world’s student population was out of school in April 2020. This has vaulted AI and EdTech from an emerging phenomenon to a virtual necessity, one of the core mediums for the delivery of education. Surveillance of children is another use of AI that is booming thanks to advances in machine learning and deep learning techniques. Although some degree of surveillance advances security, surveillance also creates privacy, safety, bias, and security risks and, especially in education contexts, limits children’s ability and willingness to take risks and otherwise express themselves.

This article maps the potential positive and negative uses of AI in children’s lives, in the hope of contributing to the conversation on developing a child rights-based framework for artificial intelligence that delineates rights and corresponding duties for governments, educators, developers, corporations, parents, and children around the world. Of growing significance alongside AI’s technological issues are its ethical ones. AI is ideological: the concern is not that AI will fail to deliver on the promise held forth by its advocates but, rather, that it will deliver without due consideration of the ethical implications.
Assumptions embedded in these algorithms will shape how education is realized, and students who do not fit those conceptual models will find themselves beyond the reach of the human wisdom that might otherwise intervene to alter an unjust outcome. Perhaps one of AI’s greatest contributions will be to make us understand how important human wisdom truly is, in education and everywhere else. The article concludes with recommendations for corporations, parents, governments, and educators on responsible AI development for children.
