Abstract

High-stakes and high-volume English language proficiency tests typically rely on multiple-choice questions (MCQs) to assess reading and listening skills. Due to the Covid-19 pandemic, more institutions are delivering MCQs via online assessment platforms, which facilitate shuffling the order of options within test items to minimize cheating. There is scant research on the role that the order and sequence of options play in MCQs, so this study examined the results of a paper-based, high-stakes English proficiency test administered in two versions. Each version had identical three-option MCQs but with a different ordering of options. The test-takers were chosen to ensure a very similar profile of language ability and level across the groups who took the two versions. The findings indicate that one in four questions exhibited significantly different levels of difficulty and discrimination between the two versions. The study identifies order dominance and sequence priming as two factors that influence the outcomes of MCQs, both of which can accentuate or diminish the power of attraction of the correct and incorrect options. These factors should be carefully considered when designing MCQs for high-stakes language proficiency tests and when shuffling options in either paper-based or computer-based testing.

Highlights

  • The assessment of learning, which is a notoriously time-consuming and challenging aspect of education in general, is even more problematic when the subject is learning a foreign language (Bachman et al. 1996)

  • To determine precisely how well the exam grouping and booklet distribution produced balanced cohorts, we reviewed the sampling against student performance in both the English Towards Proficiency (ETP) course and the English Proficiency Exam (EPE) to rule out coincidental sample variables that might interfere with the results

  • When all 74 multiple-choice questions (MCQs) are analyzed without differentiating the option patterns, the EASE index is virtually identical between TEST A and TEST B, as shown in Figure 9 below (see the sketch after this list for how such an index is computed)
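
As a rough illustration of how an ease (facility) index of this kind can be computed under classical test theory, the sketch below takes a hypothetical 0/1 response matrix (rows are test-takers, columns are items) and reports the per-item and mean proportion correct. The data and names are illustrative assumptions, not the study's actual results.

```python
# Minimal sketch of the classical-test-theory "ease" (facility) index:
# the proportion of test-takers who answer each item correctly.
# The response matrix is a hypothetical illustration, not the study's data:
# rows are test-takers, columns are items; 1 = correct, 0 = incorrect.

responses = [
    [1, 0, 1],
    [1, 1, 1],
    [0, 0, 1],
    [1, 1, 0],
]

def ease_index(responses):
    """Per-item proportion correct, from 0.0 (hardest) to 1.0 (easiest)."""
    n_takers = len(responses)
    n_items = len(responses[0])
    return [sum(row[item] for row in responses) / n_takers
            for item in range(n_items)]

per_item = ease_index(responses)
print("Per-item ease:", per_item)                   # [0.75, 0.5, 0.75]
print("Mean ease:", sum(per_item) / len(per_item))  # ~0.67
```

Comparing the mean ease of two otherwise identical forms is, in essence, the TEST A versus TEST B comparison the highlight describes.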


Introduction

The assessment of learning, which is a notoriously time-consuming and challenging aspect of education in general, is even more problematic when the subject is learning a foreign language (Bachman et al. 1996). Institutions can use third-party commercial testing services or develop their own in-house language proficiency tests. Commercial testing services offer computer-based testing, while in-house testing is normally paper-based, scored either by hand or from scanned optical answer sheets. Despite concerns about washback (Messick 1996), many educational institutions use paper-based multiple-choice questions (MCQs) because of their reliability, validity, and ease of scoring. The Covid-19 pandemic of 2020 has meant that many institutions are converting paper-based tests to online testing platforms, one of the most common being the open-source learning management system (LMS) MOODLE. The quiz module of MOODLE defaults to shuffling the options within multiple-choice questions to minimize cheating, so that each student sees a different order of options across all the items in one test. While there has been considerable research to inform test designers about the ideal number of options in a question and the order of the questions within a test, there has been extraordinarily little research on the influence of the order and sequence of options within a test item as measured by classical test theory analysis.
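
To make the shuffling mechanism concrete, here is a minimal sketch of how a platform might derive a reproducible per-student option order from a seed while keeping track of the correct answer. The function names and seeding scheme are illustrative assumptions and do not reflect Moodle's actual implementation.

```python
import random

# Illustrative sketch of per-student option shuffling for a three-option MCQ.
# The seeding scheme below is an assumption for demonstration purposes;
# it is not Moodle's actual implementation.

question = {
    "stem": "Choose the word closest in meaning to 'scant'.",
    "options": ["meagre", "abundant", "careless"],
    "answer": "meagre",
}

def shuffled_view(question, student_id, quiz_id):
    """Return the option order one student sees; same student, same order."""
    rng = random.Random(f"{student_id}:{quiz_id}")  # reproducible per student
    options = list(question["options"])
    rng.shuffle(options)
    return options

for student_id in (101, 102, 103):
    view = shuffled_view(question, student_id, quiz_id=7)
    position = view.index(question["answer"]) + 1  # 1-based position of the key
    print(f"student {student_id}: {view}, correct option at position {position}")
```

Because each student's order is derived deterministically, the platform can re-render the same view on reload and still score against the original key; it is the measurement consequences of exactly this kind of per-student reordering that the study investigates.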

