Abstract

Randomized evaluations of educational technology produce log data as a by-product: highly granular data on student and teacher usage. These datasets could shed light on causal mechanisms, effect heterogeneity, or optimal use. However, there are methodological challenges: implementation is not randomized and is defined only for the treatment group, and log datasets have a complex structure. This article discusses three approaches to help surmount these issues. One approach uses data from the treatment group to estimate the effect of usage on outcomes in an observational study. Another, causal mediation analysis, estimates the role of usage in driving the overall effect. Finally, principal stratification estimates overall effects for groups of students with the same “potential” usage. We analyze hint data from an evaluation of the Cognitive Tutor Algebra I curriculum using these three approaches, with possibly conflicting results: the observational study and mediation analysis suggest that hints reduce posttest scores, while principal stratification finds that treatment effects may be correlated with higher rates of hint requests. We discuss these mixed conclusions and give broader methodological recommendations.
