Abstract

Driven by technological innovation, continuous digital expansion has fundamentally transformed the landscape of modern higher education, prompting discussions about assessment techniques. The emergence of generative artificial intelligence raises questions about the reliability and academic honesty of multiple-choice assessments in online education. In this context, this study investigates multiple-answer questions (MAQs) versus traditional single-answer questions (SAQs) in online higher-education assessments. A mixed-methods study combining quantitative field experiments and qualitative interviews was conducted with students enrolled in an online Marketing M.Sc. program. The students were randomly divided and assessed using either SAQs or MAQs, and the effects of question format on test performance and on variables such as grade averages, study time, perceived workload, and perceived difficulty were evaluated using independent-samples t-tests and ordinary least-squares regression analysis. The results show that although grades were lower and MAQs were perceived as more difficult, study time and perceived workload did not differ significantly between the two formats. These findings suggest that, despite being more challenging, MAQs can promote deeper understanding and greater learning retention. Furthermore, even with their higher perceived difficulty and impact on performance, MAQs hold potential for addressing academic-integrity concerns related to artificial intelligence.
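As a minimal sketch of the kind of analysis the abstract describes (the paper's own code and data are not published here), the snippet below runs an independent-samples t-test on grades between the two question formats and an ordinary least-squares regression on the measured variables. The data file and column names (`format`, `grade`, `study_hours`, `workload`, `difficulty`) are hypothetical placeholders, not the authors' actual dataset.

```python
# Illustrative sketch only; file name and column names are assumptions.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("assessment_results.csv")  # hypothetical dataset

# Independent-samples t-test comparing grades between the SAQ and MAQ groups
saq_grades = df.loc[df["format"] == "SAQ", "grade"]
maq_grades = df.loc[df["format"] == "MAQ", "grade"]
t_stat, p_value = stats.ttest_ind(saq_grades, maq_grades, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

# OLS regression of grade on question format and the other measured variables
model = smf.ols(
    "grade ~ C(format) + study_hours + workload + difficulty", data=df
).fit()
print(model.summary())
```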
