Abstract
Objective
There is a dearth of research into the quality of assessments based on Multiple Choice Question (MCQ) items in Massive Open Online Courses (MOOCs). This dataset was generated to determine whether MCQ item writing flaws existed in a selection of MOOC assessments and, if so, to evaluate their prevalence. Researchers therefore reviewed MCQs from a sample of MOOCs using an evaluation protocol derived from the medical health education literature, which has an extensive evidence base on writing quality MCQ items.
Data description
This dataset was collated from MCQ items in 18 MOOCs in the areas of medical health education, life sciences and computer science. Two researchers critically reviewed 204 questions using an evidence-based evaluation protocol. In the data presented, 55% of the MCQs (112 of 204) have one or more item writing flaws, while 28% (57) contain two or more flaws. Thus, a majority of the MCQs in the dataset violate item-writing guidelines, mirroring the findings of previous research that examined rates of flaws in MCQs in traditional formal educational contexts.
Highlights
Despite increasing debate about the potential for Massive Open Online Courses (MOOCs) to contribute to formal, accredited qualifications, there is an absence of research into the quality of their assessments, including those based on Multiple Choice Question (MCQ) items
This provided the motivation to undertake an exploratory study of a selection of MOOCs to determine the existence and prevalence of item writing flaws in their MCQs
Summary
Despite increasing debate about the potential for Massive Open Online Courses (MOOCs) to contribute to formal, accredited qualifications, there is an absence of research into the quality of their assessments, including those based on Multiple Choice Question (MCQ) items. This provided the motivation to undertake an exploratory study of a selection of MOOCs to determine the existence and prevalence of item writing flaws in their MCQs. The full study and its findings are reported elsewhere [1, 2]; the associated dataset, which has not previously been published, is provided here.