Abstract

Personalised environments such as adaptive educational systems can be evaluated and compared using performance curves. Such summative studies are useful for determining whether new modifications enhance or degrade performance. Performance curves can also be used in formative studies that shape adaptive model design at a much finer level of granularity. We describe the use of learning curves for evaluating personalised educational systems, outline some potential pitfalls, and show how they may be overcome. We then describe three studies demonstrating how learning curves can be used to drive changes in the user model. First, we show how learning curves for subsets of the domain model can yield insight into the appropriateness of the model’s structure. In the second study we use this method to experiment with model granularity. Finally, we use learning curves to analyse a large volume of user data to explore their feasibility as a reliable method for fine-tuning a system’s model. The results of these experiments demonstrate the successful use of performance curves in formative studies of adaptive educational systems.

Highlights

  • Adaptive educational systems such as intelligent tutoring systems (ITS) have user modelling at their core

  • The student model is typically derived in some way from the domain model, e.g. an overlay model, where the student model is considered a subset of the domain model, or a perturbation model, where it also contains some representation of the student’s buggy concepts (Holt, Dubs, et al., 1994)

  • We argue that by comparing the local slope of the curve at N = 1, or initial learning rate, we are measuring the reduction in error at the beginning of the curve; this captures how much of the domain the student is learning in absolute terms and better reflects what we would like to optimise, namely the learning realised after receiving feedback about a knowledge component just once (see the sketch below)
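
To make that last point concrete, here is a minimal worked sketch assuming the standard power-law form of a learning curve; the symbols A and B are illustrative and are not notation taken from the paper.

```latex
% Assumed power-law learning curve: E(N) is the error rate at the N-th
% opportunity to apply a knowledge component.
\[
  E(N) = A\,N^{-B}
  \qquad\Longrightarrow\qquad
  \left.\frac{dE}{dN}\right|_{N=1} = -A\,B .
\]
% The local slope at N = 1 is -AB: the absolute reduction in error expected
% after a single further opportunity, i.e. the initial learning rate referred
% to in the highlight above.
```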


Introduction

Adaptive educational systems such as intelligent tutoring systems (ITS) have user modelling at their core. Performance curves can be illustrated with a simple task such as catching a ball: performance (e.g. the number of drops) is recorded for each participant on each attempt, giving the proportion of balls dropped on the first throw, the second, and so on. These data can be plotted to show how, in general, the ability to catch a ball improves with practice. The same approach applies to the knowledge components of an adaptive educational system: in both cases the likelihood of error, whether dropping the ball or incorrectly applying a knowledge component, clearly decreases with the number of opportunities to apply it.
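
The aggregate-and-fit step described above can be sketched as follows. This is a minimal illustration rather than the paper's implementation: the error-rate figures are invented, and the power-law model and helper names (e.g. power_law) are assumptions for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical aggregated data: for each opportunity N (first attempt, second
# attempt, ...), the proportion of students who applied the knowledge
# component incorrectly.
opportunities = np.arange(1, 9, dtype=float)
error_rate = np.array([0.42, 0.31, 0.26, 0.22, 0.20, 0.18, 0.17, 0.16])

def power_law(n, a, b):
    """Power-law learning curve: expected error rate after n opportunities."""
    return a * n ** (-b)

# Fit the curve: a is roughly the error rate at the first opportunity,
# b is the learning-rate exponent.
(a, b), _ = curve_fit(power_law, opportunities, error_rate, p0=(0.5, 0.5))

# Local slope at N = 1, i.e. the initial learning rate discussed earlier.
initial_learning_rate = -a * b

print(f"fit: error(N) ≈ {a:.3f} * N^(-{b:.3f})")
print(f"initial learning rate (slope at N = 1): {initial_learning_rate:.3f}")
```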
