Abstract

Intelligent tutoring systems are effective for improving students’ learning outcomes (Pane et al. 2013; Koedinger and Anderson, International Journal of Artificial Intelligence in Education, 8, 1–14, 1997; Bowen et al., Journal of Policy Analysis and Management, 1, 94–111, 2013). However, constructing tutoring systems that are pedagogically effective has been widely recognized as a challenging problem (Murray 2003; Murray, International Journal of Artificial Intelligence in Education, 10, 98–129, 1999). In this paper, we explore the use of computational models of apprentice learning, or computer models that learn interactively from examples and feedback, for authoring expert models via demonstrations and feedback (Matsuda et al., International Journal of Artificial Intelligence in Education, 25(1), 1–34, 2014) across a wide range of domains. To support these investigations, we present the Apprentice Learner Architecture, which posits the types of knowledge, performance, and learning components needed for apprentice learning. We use this architecture to create two models: the Decision Tree model, which learns skills non-incrementally, and the Trestle model, which instead learns incrementally. Both models draw on the same small set of prior knowledge (six operators and three types of relational knowledge) to support expert-model authoring. Despite their limited prior knowledge, we demonstrate their use for efficiently authoring a novel experimental design tutor and show that they are capable of learning an expert model for seven additional tutoring systems that teach a wide range of knowledge types (associations, categories, and skills) across multiple domains (language, math, engineering, and science). This work shows that apprentice learner models are efficient for authoring tutors that would be difficult to build with existing non-programmer authoring approaches (e.g., experimental design or stoichiometry tutors). Further, we show that these models can be applied to author tutors across eight tutor domains even though they have only a small, fixed set of prior knowledge. This work lays the foundation for new interactive, machine-learning-based authoring paradigms that empower teachers and other non-programmers to build pedagogically effective educational technologies at scale.
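To make the apprentice-learning setup concrete, the sketch below shows one possible training loop in which an agent either proposes an action for the current tutor state (and receives correctness feedback) or, lacking an applicable skill, learns from a demonstration. The class, the method names, and the simple subset-based generalization are illustrative assumptions for exposition only, not the actual Apprentice Learner Architecture implementation.

```python
# Minimal sketch of an apprentice-learning interaction loop (assumed names,
# not the Apprentice Learner Architecture API).

class ApprenticeAgent:
    """Learns skills interactively from demonstrations and correctness feedback."""

    def __init__(self):
        # Each learned skill is a (condition, action) pair of callables.
        self.skills = []

    def attempt(self, state):
        """Propose an action for the current tutor state, or None if no skill applies."""
        for condition, action in self.skills:
            if condition(state):
                return action(state)
        return None  # no applicable skill yet -> ask the tutor for a demonstration

    def learn(self, state, action, correct):
        """Update skill knowledge from a demonstration or from correctness feedback."""
        if correct:
            # Naive generalization: the action applies whenever the observed
            # state features are a subset of a future state's features.
            seen = frozenset(state.items())
            self.skills.append((
                lambda s, seen=seen: seen <= frozenset(s.items()),
                lambda s, action=action: action,
            ))
        # On negative feedback, a real model would specialize the offending
        # skill's conditions rather than simply discarding the example.


def tutoring_session(agent, tutor_steps):
    """Run one training session against a tutor that yields (state, expert_action) pairs."""
    for state, expert_action in tutor_steps:
        proposal = agent.attempt(state)
        if proposal is None:
            agent.learn(state, expert_action, correct=True)  # learn from a demonstration
        else:
            agent.learn(state, proposal, correct=(proposal == expert_action))


# Hypothetical usage with a made-up tutor state and action label:
agent = ApprenticeAgent()
tutoring_session(agent, [({"field": "numerator", "given": "1/2 + 1/3"}, "convert")])
```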

Highlights

  • Intelligent tutoring systems have been shown to improve student learning across multiple domains (Beal et al. 2007; Graesser et al. 2001; Koedinger and Anderson 1997; Mitrovic et al. 2002; Ritter et al. 2007; VanLehn 2011), but designing and building tutoring systems that are pedagogically effective is difficult and expensive (Murray 2005).

  • We explore the use of computational models of apprentice learning, or computer models that learn interactively from examples and feedback, for authoring expert models via demonstrations and feedback (Matsuda et al., International Journal of Artificial Intelligence in Education, 25(1), 1–34, 2014) across a wide range of domains.

  • We investigate two questions: (1) is authoring with simulated students a viable approach when domain-specific knowledge is not available, and (2) how does the approach compare to Example-Tracing with mass production? To investigate these questions, we describe how to author a novel tutor for experimental design using both the Decision Tree model and Example-Tracing, and evaluate the efficiency of each approach.



Introduction

Intelligent tutoring systems have been shown to improve student learning across multiple domains (Beal et al. 2007; Graesser et al. 2001; Koedinger and Anderson 1997; Mitrovic et al. 2002; Ritter et al. 2007; VanLehn 2011), but designing and building tutoring systems that are pedagogically effective is difficult and expensive (Murray 2005). Each phase of this design process requires time, expertise, and resources to execute properly, which, in general, makes tutor development a cost-prohibitive endeavor (Murray 1999). Many researchers have created tools to support the tutor development process (Aleven et al. 2006; Sottilare and Holden 2013; Murray 1999, 2003). While existing tutor authoring tools have been shown to reduce the expertise requirements and time needed to build a tutor (e.g., Example-Tracing has been shown to reduce authoring time by as much as four times; Aleven et al. 2009), they still struggle to support non-programmers trying to build tutors for complex domains (MacLellan et al. 2015). How technology can support tutor authoring remains an open research question.

