Abstract

MOOC platforms have seen significant membership growth in recent years, and competition among courses in this digitized, remote education landscape is intense. Based on observations of top-rated MOOCs, this study poses a research question: “What makes a great MOOC? What makes a hit?” To explore the answers, the study applies a crowdsourcing approach and interprets the semantics of reviews for the top-rated courses on Coursera.org. The paper proceeds in multiple steps, with findings relevant to MOOC programs at universities worldwide. First, through exploratory analysis of learner reviews and expert judgment, the study identifies two distinct course categories based on learners' outcome intent: knowledge-seeking MOOCs and skill-seeking MOOCs. It then uses a topical ontology of keywords and sentiment techniques to derive learners' intent from their comments. Through sentiment analysis and correlation analysis, it shows that ratings of knowledge-seeking MOOCs are driven by the quality of course design and materials, whereas ratings of skill-seeking MOOCs are driven by the instructor and their ability to present lectures and integrate course materials and assignments. This crowdsourcing method obtains insights from large samples of learner reviews without the priming or self-selection biases of open surveys or interviews. The findings demonstrate the effectiveness of leveraging online learner reviews and offer practical implications for what truly “makes a hit” among top-rated MOOCs.
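The pipeline the abstract describes, mapping review keywords onto an intent ontology and attaching a sentiment score, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the keyword lists, sentiment lexicon, and sample review below are all hypothetical placeholders, and a real study would use a full ontology and lexicon.

```python
# Hypothetical topical ontology: keywords signalling each learner outcome intent.
# These lists are illustrative assumptions, not the paper's actual ontology.
ONTOLOGY = {
    "knowledge-seeking": {"concept", "theory", "material", "reading", "lecture"},
    "skill-seeking": {"hands-on", "project", "assignment", "practice", "instructor"},
}

# Tiny illustrative sentiment lexicon (a real study would use a full lexicon).
POSITIVE = {"great", "clear", "excellent", "engaging", "helpful"}
NEGATIVE = {"confusing", "boring", "outdated", "poor"}

def classify_intent(review: str) -> str:
    """Return the intent category whose ontology keywords appear most often."""
    text = review.lower()
    scores = {cat: sum(text.count(kw) for kw in kws) for cat, kws in ONTOLOGY.items()}
    return max(scores, key=scores.get)

def sentiment_score(review: str) -> int:
    """Positive-minus-negative word count as a crude polarity score."""
    words = [w.strip(".,!?") for w in review.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Hypothetical learner review for demonstration.
review = ("Great hands-on project and a helpful instructor, "
          "though one assignment was confusing.")
print(classify_intent(review))
print(sentiment_score(review))
```

Aggregating these per-review intent labels and sentiment scores across many courses is what would feed the correlation analysis the abstract mentions.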
