Abstract

Conversational agents have been widely used in education to support student learning, and there have been recent attempts to design conversational agents that conduct assessments (i.e., conversation-based assessment: CBA). In this study, we developed CBA with constructed-response and selected-response tests using Rasa, an artificial intelligence-based tool, and deployed it via Google Chat to support formative assessment. We evaluated (1) its performance in handling students' responses and (2) its usability through cognitive walkthroughs conducted by external evaluators. CBA with constructed-response tests matched student responses to the appropriate conversation paths in most cases, whereas CBA with selected-response tests demonstrated perfect agreement between system design and implementation. The cognitive walkthrough confirmed CBA's usability while also surfacing several potential issues that could be improved. Although participating students did not experience these issues, we report them to help researchers, designers, and practitioners improve the assessment experience for students using CBA.
