Abstract

Recent research has suggested that re-setting the standard for each administration of a small-sample examination is costly and does not adequately maintain similar performance expectations from year to year. Small-sample equating methods have shown promise with samples of 20 to 30 examinees; for groups of fewer than 20, options are scarcer. This simulation study examined balanced and unbalanced designs across nine equating models, including both classical and small-sample models, while also varying sample size, differences in form difficulty and candidate population, and the size of the anchor item set. The results support the use of nominal weights approaches in combination with either circle-arc or mean equating. Consistent with other research, this study found that the most effective ways to improve equating results are to increase the sample size and/or the number of anchor items shared across the old and new forms. However, a testing program's tolerance for item reuse will influence the decision to pool administrations.
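
For readers unfamiliar with the two small-sample methods the abstract names, the Python sketch below illustrates mean equating and a simplified circle-arc equating in their most basic random-groups form. It is an illustration only, not the study's implementation: the function names, the 0-50 score range, the NumPy dependency, and the choice of the simplified circle-arc variant (in the style of Livingston and Kim) are assumptions, and the nominal-weights step that uses anchor items to estimate full-form means is not shown.

    import numpy as np

    def mean_equate(x, mean_x, mean_y):
        # Mean equating: shift new-form scores by the difference in form means.
        return np.asarray(x, dtype=float) + (mean_y - mean_x)

    def circle_arc_equate(x, mean_x, mean_y, lo_x=0.0, hi_x=50.0, lo_y=0.0, hi_y=50.0):
        # Simplified circle-arc equating (illustrative): a straight line joins the
        # lowest and highest attainable scores on the two forms, and a circular arc
        # through the de-trended middle point (the form means) adds the curvature.
        x = np.asarray(x, dtype=float)
        slope = (hi_y - lo_y) / (hi_x - lo_x)
        line = lo_y + slope * (x - lo_x)

        # Middle point expressed as a vertical deviation from that line.
        mid_dev = mean_y - (lo_y + slope * (mean_x - lo_x))
        if np.isclose(mid_dev, 0.0):
            return line  # means fall on the line, so the equating is linear

        # Circle through (lo_x, 0), (mean_x, mid_dev), (hi_x, 0).
        xc = (lo_x + hi_x) / 2.0
        yc = ((mean_x - xc) ** 2 + mid_dev ** 2 - (lo_x - xc) ** 2) / (2.0 * mid_dev)
        r2 = (lo_x - xc) ** 2 + yc ** 2
        arc = yc + np.sign(mid_dev) * np.sqrt(np.maximum(r2 - (x - xc) ** 2, 0.0))
        return line + arc

    # Hypothetical example: two 50-item forms, new form slightly harder than the old one.
    new_form_scores = np.arange(0, 51)
    equated = circle_arc_equate(new_form_scores, mean_x=30.2, mean_y=31.5)

Roughly speaking, a nominal weights approach of the kind the abstract favors would first estimate each full-form mean from the anchor-item mean scaled by the ratio of total items to anchor items, and then apply a function like either of the two above.
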
