Abstract

Model evaluation – the process of making inferences about the performance of predictive models – is a critical component of predictive modeling research in learning analytics. In this work, we present an overview of the state-of-the-practice of model evaluation in learning analytics, which overwhelmingly uses only naïve methods for model evaluation or, less commonly, statistical tests which are not appropriate for predictive model evaluation. We then provide an overview of more appropriate methods for model evaluation, presenting both frequentist and a preferred Bayesian method. Finally, we apply three methods – the naïve average commonly used in learning analytics, frequentist null hypothesis significance testing (NHST), and hierarchical Bayesian model evaluation – to a large set of MOOC data. We compare 96 different predictive modeling techniques, including different feature sets, statistical modeling algorithms, and tuning hyperparameters for each, using this case study to demonstrate the different experimental conclusions these evaluation techniques provide.
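
As a rough illustration of the contrast the abstract draws between naïve averaging and a frequentist significance test, the short Python sketch below compares two hypothetical classifiers' per-fold AUC scores with a simple mean and with a paired t-test. It is not taken from the paper: the fold scores and model labels are invented, and the paper's preferred hierarchical Bayesian evaluation is not shown.

# Sketch only: contrasts naive averaging of per-fold scores with a
# frequentist paired t-test. The data and models are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-fold AUC scores for two predictive modeling techniques.
auc_model_a = rng.normal(loc=0.74, scale=0.02, size=10)
auc_model_b = rng.normal(loc=0.75, scale=0.02, size=10)

# Naive evaluation: prefer the model with the higher mean score.
print("Mean AUC, model A:", auc_model_a.mean())
print("Mean AUC, model B:", auc_model_b.mean())

# Frequentist NHST: paired t-test over the same folds. Fold-level scores
# are not independent, which is one reason such tests can be inappropriate
# for predictive model evaluation.
t_stat, p_value = stats.ttest_rel(auc_model_a, auc_model_b)
print(f"paired t = {t_stat:.3f}, p = {p_value:.3f}")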
