Abstract

Most existing research on deep learning focuses on improving performance by developing new architectures or regularizers. In this paper, by contrast, we study the modeling of uncertainty in recurrent networks for student response modeling, specifically knowledge tracing. Knowledge tracing is a time-series machine learning problem: it consists of inferring a student's mastery of a skill as they work through a question bank, so that the curriculum can be adjusted for efficient learning. Deep Knowledge Tracing (DKT) applies deep learning to knowledge tracing and has achieved better results than models such as Bayesian Knowledge Tracing (BKT) and Performance Factor Analysis (PFA). However, the opaqueness of deep knowledge tracing models has also drawn criticism; providing an uncertainty score for each prediction helps mitigate this opaqueness. To investigate uncertainty modeling in DKT, we first examine a popular way of modeling data-dependent uncertainty using Monte Carlo sampling and show that it is insufficient to model the variance in the data. Second, we show how to incorporate sensible uncertainties by explicitly regularizing the cross-entropy loss function. Third, we evaluate our method both on three real datasets and, in a more controlled way, on synthetic data; the synthetic data allow us to quantitatively assess the generated uncertainties. The results show that our method matches the performance of standard deep knowledge tracing models while providing meaningful prediction uncertainties.
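The Monte Carlo approach the abstract refers to is commonly realized as Monte Carlo dropout: dropout is kept active at prediction time, and the spread of repeated stochastic forward passes serves as an uncertainty estimate. The following is a minimal sketch of that idea, not code from the paper; the one-layer model, weights, and sample counts are illustrative stand-ins for the recurrent DKT network.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, W, n_samples=100, p_drop=0.5):
    """Monte Carlo dropout (sketch): sample a fresh dropout mask per forward
    pass and summarize the resulting predictive distribution."""
    probs = []
    for _ in range(n_samples):
        mask = rng.random(W.shape) > p_drop           # sample a dropout mask
        logits = x @ (W * mask) / (1.0 - p_drop)      # inverted-dropout scaling
        probs.append(1.0 / (1.0 + np.exp(-logits)))   # sigmoid -> P(correct answer)
    probs = np.stack(probs)
    # mean = point prediction, variance = per-prediction uncertainty score
    return probs.mean(axis=0), probs.var(axis=0)

# Toy stand-ins for an RNN hidden state and its output layer.
x = rng.normal(size=(1, 8))
W = rng.normal(size=(8, 1))
mean, var = mc_dropout_predict(x, W)
```

The variance returned here reflects only the model's sampling noise, which is one way to see why, as the abstract argues, such a score can fail to capture the variance inherent in the data itself.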
