Abstract

Position effects may occur in both paper-and-pencil tests and computerized assessments when examinees respond to the same items located in different positions on the test. To examine position effects in large-scale assessments, previous studies often used multilevel item response models within the generalized linear mixed modeling framework. Drawing on the equivalence of the item response theory and binary factor analysis frameworks when modeling dichotomous item responses, this study introduces a structural equation modeling (SEM) approach that is capable of estimating various types of position effects. Using real data from a large-scale reading assessment, the SEM approach is demonstrated for investigating form, passage position, and item position effects for reading items. Results from a simulation study are also presented to evaluate the accuracy of the SEM approach in detecting item position effects. The implications of using the SEM approach are discussed in the context of large-scale assessments.
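
The equivalence invoked in the abstract is the well-known correspondence between the two-parameter normal-ogive IRT model and a single-factor model for binary indicators (Takane and de Leeuw, 1987). As a sketch in generic notation (the symbols below are illustrative assumptions, not the article's own), the IRT model on the left can be rewritten as the threshold factor model on the right:

$$
P(X_{ij} = 1 \mid \theta_j) = \Phi\big(a_i(\theta_j - b_i)\big)
\quad\Longleftrightarrow\quad
X_{ij} = 1 \iff x^{*}_{ij} = \lambda_i \theta_j + \varepsilon_{ij} > \tau_i,
$$

with $\varepsilon_{ij} \sim N(0,\, 1 - \lambda_i^2)$ under the delta parameterization, giving $a_i = \lambda_i / \sqrt{1 - \lambda_i^2}$ and $b_i = \tau_i / \lambda_i$. This mapping is what allows position effects formulated in IRT terms to be estimated as constraints on loadings and thresholds within an SEM.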

Highlights

  • Large-scale assessments in education are typically administered using multiple test forms or booklets in which the same items are presented in different positions or locations within the forms

  • Type I error rate is the average proportion of replications in which items with no position effects were falsely flagged as exhibiting a linear position effect at the α = .05 level

  • Item position effect, often viewed as a context effect in assessments (Brennan, 1992; Weirich et al., 2016), occurs when the difficulty or discrimination of a test item varies depending on the item's location on the test form (a model sketch follows this list)
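
As a hedged sketch of how a linear item position effect can enter the model, in the spirit of Debeer and Janssen (2013) but with notation assumed here rather than quoted from the article, the item's difficulty is allowed to shift with its serving position:

$$
\operatorname{logit} P(X_{ijk} = 1 \mid \theta_j) = a_i \theta_j - \big(b_i + \gamma_i\,\mathrm{pos}_{ik}\big),
$$

where $\mathrm{pos}_{ik}$ is the position of item $i$ in booklet $k$ and $\gamma_i$ is the linear position effect; $\gamma_i > 0$ means the item becomes harder when administered later. Under this formulation, an item simulated with $\gamma_i = 0$ that is nonetheless flagged as having $\gamma_i \neq 0$ at α = .05 counts toward the Type I error rate described above.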


Introduction

Large-scale assessments in education are typically administered using multiple test forms or booklets in which the same items are presented in different positions or locations within the forms. The main purpose of this practice is to improve test security by reducing the possibility of cheating among test takers (Debeer & Janssen, 2013). The practice also allows test developers to administer a greater number of field-test items embedded within multiple test forms. Although this is an effective way to ensure the integrity of the assessment, it may result in context effects, such as an item position effect, that can unwittingly influence the estimation of item parameters and the latent trait (Bulut, 2015; Hohensinn et al., 2011). Test takers may experience either increasing item difficulty toward the end of the test due to fatigue or decreasing item difficulty due to test-wiseness as they become more familiar with the test content (Hohensinn et al., 2008).
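
To make the fatigue intuition concrete, the following is a minimal, self-contained simulation sketch, not the article's actual simulation design: all names and parameter values, including the effect size gamma, are illustrative assumptions. It generates 2PL responses whose difficulty grows linearly with an item's serving position and shows the resulting decline in proportion correct by position.

```python
import numpy as np

rng = np.random.default_rng(42)

n_persons, n_items = 2000, 20
theta = rng.normal(0.0, 1.0, n_persons)  # latent trait of each examinee
a = rng.uniform(0.8, 2.0, n_items)       # item discriminations
b = rng.normal(0.0, 1.0, n_items)        # baseline item difficulties
gamma = 0.02                             # assumed linear position effect (logit shift per position)

# Each examinee receives the items in a random order, a rough proxy for
# administering multiple booklets with rotated item positions.
positions = np.array([rng.permutation(n_items) for _ in range(n_persons)])

# Difficulty increases with serving position (a fatigue-like effect):
# logit P(X_ij = 1) = a_i * (theta_j - (b_i + gamma * pos_ij))
effective_b = b[None, :] + gamma * positions
logits = a[None, :] * (theta[:, None] - effective_b)
prob = 1.0 / (1.0 + np.exp(-logits))
responses = (rng.uniform(size=prob.shape) < prob).astype(int)

# Sanity check: the proportion correct should decline with serving position.
for p in range(n_items):
    in_position_p = positions == p
    print(f"position {p:2d}: proportion correct = {responses[in_position_p].mean():.3f}")
```

With gamma = 0.02, an item served last is about 0.4 logits harder than when served first, which is large enough to see a steady drop in the printed position means across the 20 positions.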

