Abstract
Evaluating user experience (UX) is essential for optimizing educational chatbots to enhance learning outcomes and student productivity. This study introduces a novel weighted composite metric integrating interface usability assessment (via the Chatbot Usability Questionnaire, CUQ), engagement measurement (via the User Engagement Scale—Short Form, UES-SF), and objective performance indicators (error rates and response times), addressing gaps in existing evaluation methods across interaction modes (text-based, menu-based, and hybrid) and question complexities. A 3 × 3 within-subjects experiment (n = 30) measured these distinct UX dimensions through standardized instruments and performance metrics, supplemented by qualitative feedback. Principal Component Analysis (PCA) was used to derive weights for the composite UX metric from empirical patterns in user interactions. Repeated-measures ANOVA revealed that the hybrid interaction mode outperformed the others, achieving significantly higher usability (F(2,58) = 89.32, p < 0.001) and engagement (F(2,58) = 8.67, p < 0.001), with fewer errors and faster response times under complex query conditions. These findings demonstrate the hybrid mode’s adaptability across question complexities. The proposed framework establishes a standardized method for evaluating educational chatbots, providing actionable insights for interface optimization and sustainable learning tools.
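The PCA-based weighting described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' actual analysis code: the metric names, the use of first-component loadings as weights, and the synthetic data are all assumptions for demonstration; the paper does not specify which component(s) or rotation were used.

```python
import numpy as np

def composite_ux_weights(scores: np.ndarray) -> np.ndarray:
    """Derive metric weights from the first principal component.

    scores: (n_participants, n_metrics) array with every metric oriented
    so that higher = better (e.g. error rate and response time would be
    negated before being passed in).
    """
    # Standardize each metric so PCA is not dominated by scale differences
    z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
    # PCA via eigendecomposition of the correlation matrix
    corr = np.cov(z, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    pc1 = np.abs(eigvecs[:, np.argmax(eigvals)])  # loadings of PC1
    return pc1 / pc1.sum()                        # normalize to sum to 1

# Hypothetical data: 30 participants x 4 metrics
# (CUQ, UES-SF, negated error rate, negated response time)
rng = np.random.default_rng(0)
demo = rng.normal(size=(30, 4))
w = composite_ux_weights(demo)
composite = demo @ w  # one weighted composite UX score per participant
```

Using absolute loadings normalized to sum to one keeps the composite interpretable as a weighted average; a real analysis would also check how much variance PC1 explains before adopting it.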