Abstract

A primary challenge in implementing educational support systems is establishing student trust in the systems themselves. Trust is critical to the acceptance and use of AI-enabled systems because it reduces the uncertainty and perceived risk associated with adopting new technology. A review of existing studies on trust in AI-based systems is needed to provide a solid foundation for future research. This study aims to identify gaps in the literature on establishing user trust in AI-based educational systems by exploring the criteria for trust and the challenges of building trust in AI systems. A narrative review of the literature is conducted to synthesize the findings of selected articles, covering (1) fundamental principles of trust and the process of establishing trust in non-human entities; (2) technical issues in explainable AI; (3) the use of explainable AI to facilitate decision-making; and (4) the use of AI systems to support educational activities and their influence. This article summarizes trust criteria, including reliance, transparency, affectiveness, integrity, consistency, fairness, accountability, security, and usability. Building trust in AI systems requires addressing technical, ethical, and societal challenges to ensure the responsible and beneficial use of AI for individuals and society.
