Abstract
Randomized evaluations of educational technology produce log data as a by-product: highly granular data on student and teacher usage. These datasets could shed light on causal mechanisms, effect heterogeneity, or optimal use. However, there are methodological challenges: implementation is not randomized and is only defined for the treatment group, and log datasets have a complex structure. This article discusses three approaches to help surmount these issues. One approach uses data from the treatment group to estimate the effect of usage on outcomes in an observational study. Another, causal mediation analysis, estimates the role of usage in driving the overall effect. Finally, principal stratification estimates overall effects for groups of students with the same “potential” usage. We analyze hint data from an evaluation of the Cognitive Tutor Algebra I curriculum using these three approaches, with possibly conflicting results: the observational study and mediation analysis suggest that hints reduce posttest scores, while principal stratification finds that treatment effects may be correlated with higher rates of hint requests. We discuss these mixed conclusions and give broader methodological recommendations.