Abstract

In online courses, discussion forums play a key role in enhancing student interaction with peers and instructors. Due to large enrolment sizes, instructors often struggle to respond to students in a timely manner. To address this problem, both traditional Machine Learning (ML) approaches (e.g., Random Forest) and Deep Learning (DL) approaches have been applied to classify educational forum posts (e.g., those requiring urgent responses vs. those that did not). However, an in-depth comparison between these two kinds of approaches is lacking. To better guide the selection of an appropriate model, we aimed to provide a comparative study of the effectiveness of six frequently used traditional ML and DL models across a total of seven different classification tasks centering on two datasets of educational forum posts. Through extensive evaluation, we showed that (i) the up-to-date DL approaches did not necessarily outperform traditional ML approaches; (ii) the performance gap between the two kinds of approaches can be up to 3.68% (measured in F1 score); and (iii) the traditional ML approaches should be equipped with carefully designed features, especially those of common importance across different classification tasks. Based on the derived findings, we further provided insights to help instructors and educators construct effective classifiers for characterizing educational forum discussions, which, ultimately, would enable them to provide students with timely and personalized learning support.
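The F1 score used above to quantify the performance gap is the harmonic mean of precision and recall. As a minimal, stdlib-only sketch (the labels below are illustrative, not from the study's datasets), this is how it would be computed for a binary urgent/non-urgent post classification task:

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 score for a binary task: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

# Illustrative labels: 1 = urgent post, 0 = non-urgent post
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(f1_score(y_true, y_pred))  # → 0.75
```

In practice one would obtain `y_pred` from a trained classifier (e.g., a Random Forest over hand-crafted post features, or a fine-tuned DL model) and report F1 on a held-out test split.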
