ABSTRACT Driven by technological innovation, continuous digital expansion has fundamentally transformed the landscape of modern higher education, prompting discussion of evaluation techniques. The emergence of generative artificial intelligence raises questions about the reliability and academic honesty of multiple-choice assessments in online education. In this context, this study investigates multiple-answer questions (MAQs) versus traditional single-answer questions (SAQs) in online higher-education assessments. A mixed-methods study combining quantitative field experiments and qualitative interviews was conducted with students enrolled in an online Marketing M.Sc. program. Students were randomly assigned to assessments using either SAQs or MAQs, and the two formats were compared on outcomes such as grade averages, study times, perceived workload, and perceived difficulty using independent-samples t-tests and ordinary least-squares (OLS) regression analysis. The results show that although grades were lower and MAQs were perceived as more difficult, study times and perceived workload did not differ significantly between the two formats. These findings suggest that, despite their greater challenge, MAQs can promote deeper understanding and greater learning retention. Furthermore, even with their higher perceived difficulty and impact on performance, MAQs hold potential for addressing academic-integrity concerns related to artificial intelligence.
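For readers who want to see how the two statistical procedures named above could be carried out, the following is a minimal Python sketch of an independent-samples t-test and an OLS regression comparing the two question formats. The data frame, column names, and all values are hypothetical placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Hypothetical data set: one row per student; all columns are illustrative.
rng = np.random.default_rng(0)
n = 60
df = pd.DataFrame({
    "format": rng.choice(["SAQ", "MAQ"], size=n),     # assigned question format
    "grade": rng.normal(75, 10, size=n),              # test grade
    "study_hours": rng.normal(12, 3, size=n),         # self-reported study time
    "workload": rng.integers(1, 8, size=n),           # perceived workload (1-7 scale)
})

# Independent-samples t-test: compare grades between the SAQ and MAQ groups.
saq = df.loc[df["format"] == "SAQ", "grade"]
maq = df.loc[df["format"] == "MAQ", "grade"]
t_stat, p_value = stats.ttest_ind(saq, maq, equal_var=False)  # Welch's variant
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# OLS regression: grade as a function of question format plus covariates.
model = smf.ols("grade ~ C(format) + study_hours + workload", data=df).fit()
print(model.summary())
```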