Abstract

As artificial intelligence systems increasingly make high-stakes recommendations and decisions in many facets of our lives, explainable artificial intelligence, which informs stakeholders of the reasoning behind such systems' decisions, has been gaining attention in a wide range of fields, including education. Education also has a long history of research into self-explanation, in which students explain the process by which they arrived at their answers. Self-explanation is recognized as a beneficial intervention for promoting metacognitive skills; however, it also has unexplored potential to provide insight into the problems learners experience, whether due to inadequate prerequisite knowledge and skills or to difficulties in applying them to the task at hand. While this aspect of self-explanation has long been of interest to teachers, there is little research into using such information to inform educational AI systems. In this paper, we propose a system in which students and the AI system explain to each other the reasons behind their decisions: students self-explain their cognition during the answering process, and the system explains its recommendations through its internal mechanisms and other abstract representations of its model algorithms.
