Abstract

Many recent studies have examined the viability of applying recurrent neural networks (RNNs) to educational data, in most cases by comparing their performance to that of existing models in the artificial intelligence in education (AIED) and educational data mining (EDM) fields. While there is increasing evidence that RNN models can, in many situations, improve on the performance of these existing methods, in this work we take a different approach. Rather than directly comparing RNNs with other models, we are instead interested in the results when an RNN is combined with one of these existing models. In particular, we attempt to improve the performance of ALEKS (“Assessment and LEarning in Knowledge Spaces”), an adaptive learning and assessment system based on Knowledge Space Theory, through the use of RNN models. Using data from more than 1.4 million ALEKS assessments, we first build an RNN classifier that attempts to predict the final result of each assessment. After verifying the accuracy of these predictions, we develop a stopping algorithm with the goal of improving the efficiency of the ALEKS assessment by reducing the total number of questions asked. Based on this stopping algorithm, we give a comprehensive analysis of its possible effects on students. We show that combining an RNN with the ALEKS assessment can reduce the average assessment length by over 26% while maintaining a high degree of accuracy.
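To make the general idea concrete, the sketch below is a minimal illustration, not the authors' implementation: the model class, the response encoding, the layer dimensions, and the 0.95 confidence threshold are all illustrative assumptions. It shows the pattern described in the abstract, an RNN (here an LSTM) that predicts the final assessment outcome after each answered question, paired with a simple confidence-based rule for stopping the assessment early.

```python
# Hypothetical sketch of an RNN-based early-stopping rule for an adaptive
# assessment. All names, sizes, and the threshold are assumptions, not
# details taken from the paper or from ALEKS.

import torch
import torch.nn as nn


class AssessmentRNN(nn.Module):
    """LSTM classifier over a sequence of (question id, correctness) responses."""

    def __init__(self, num_questions: int, num_outcomes: int,
                 embed_dim: int = 32, hidden_dim: int = 64):
        super().__init__()
        # Each answered question is encoded as a single token:
        # question_id * 2 + correctness (0 or 1).
        self.embed = nn.Embedding(num_questions * 2, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_outcomes)

    def forward(self, steps: torch.Tensor) -> torch.Tensor:
        # steps: (batch, seq_len) of encoded response tokens
        out, _ = self.lstm(self.embed(steps))
        return self.head(out)  # outcome logits after every question


def should_stop(logits_t: torch.Tensor, threshold: float = 0.95) -> bool:
    """Stop the assessment once the predicted final outcome is confident enough."""
    probs = torch.softmax(logits_t, dim=-1)
    return bool(probs.max() >= threshold)


if __name__ == "__main__":
    model = AssessmentRNN(num_questions=300, num_outcomes=50)
    # Fake partial assessment: 10 answered questions for one student.
    responses = torch.randint(0, 600, (1, 10))
    logits = model(responses)
    print("stop early?", should_stop(logits[0, -1]))
```

In this sketch the stopping decision is re-evaluated after every answered question, which is one plausible way an RNN's predictions could be used to shorten an adaptive assessment while keeping its final result largely unchanged.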
