Abstract

As online courses become more common, practitioners need clear guidance on how to translate educational best practices into web-based instruction. Student engagement is also a pressing concern in online courses, which often suffer high dropout rates. Our goals in this work were to experimentally study routine instructional design choices and to measure their effects on students’ subjective experiences (engagement, mind wandering, and interest) in addition to objective learning outcomes. Using randomized controlled trials, we studied the effect of varying instructional activities (namely, assessment and a step-through interactive) on participants’ learning and subjective experiences in a lesson drawn from an online immunology course. Participants were recruited from Amazon Mechanical Turk. Results showed that participants were more likely to drop out when they were in conditions that included assessment. Moreover, assessment with minimal feedback (correct answers only) led to the lowest subjective ratings of any experimental condition. Some of the negative effects of assessment were mitigated by the addition of assessment explanations or a summary interactive. We found no differences between the experimental conditions in learning outcomes, but the groups did differ in the accuracy of their score predictions. Finally, prior knowledge and self-rated confusion were predictors of post-test scores. Using student behavior data from the same online immunology course, we corroborated the importance of assessment explanations. Our results carry a clear implication for course developers: adding explanations to assessment questions is a simple way to improve online courses.

Highlights

  • Many researchers have evaluated different elements of computerized instruction using experimental and observational methods (e.g., Clark & Mayer, 2011; Szpunar, Khan, & Schacter, 2013; Türkay, 2016)

  • We focused on the impact of routine online course design choices by asking questions such as: How does the addition of multiple-choice or short-answer assessments between videos impact learning, persistence, and engagement in a fully online advanced science lesson?

  • Formative assessment was directly linked to attrition, a negative outcome that instructors want to avoid



Introduction

Many researchers have evaluated different elements of computerized instruction using experimental and observational methods (e.g., Clark & Mayer, 2011; Szpunar, Khan, & Schacter, 2013; Türkay, 2016). The current work is guided by experiments performed by Szpunar, Khan, and Schacter (2013) and Szpunar, Jing, and Schacter (2014). The results of these studies support the idea that interleaving assessments with short videos enhances learning while reducing mind wandering and overconfidence. In these studies, participants were tested immediately after instruction with the same test items as the study materials. In a similar study that compared testing with a control activity (reading content-aligned statements), the effect of testing was less dramatic (Kang, McDermott, & Roediger, 2007). This suggests that an indirect effect of testing may be targeted reexposure to the most important content. While the learning benefits of testing are well characterized, comparatively few studies have evaluated the impact of assessment in online courses on students’ subjective experiences.

