This study explores the potential of dynamic, machine learning (ML)-based modeling to enhance students’ argumentation skills—a crucial component of educational and professional success. Traditional educational tools often rely on static modeling, which neither adapts to individual learner needs nor provides real-time feedback. In contrast, our research introduces an innovative ML-based system designed to offer dynamic, personalized feedback on argumentation skills. We conducted three empirical studies comparing this system against traditional approaches, namely scripted support and adaptive support modeling. Our results show that dynamic behavioral modeling significantly improves learners’ objective argumentation skills across domains, outperforming the established methods. The results further indicate that, compared with adaptive support, the effect of the dynamic modeling approach holds for both complex tasks (large effect) and simple tasks (medium effect) and supports learners with lower and higher expertise alike. This research has important implications for educational policy and practice: incorporating such dynamic systems could transform learning environments by providing scalable, individualized support. This would not only foster essential skills but also cater to diverse learner profiles, potentially reducing educational disparities. Our work suggests a shift toward integrating more adaptive technologies in educational settings to better prepare students for the demands of the modern workforce.