Abstract

Throughout the world, cut scores are an important aspect of a high-stakes testing program because they are a key operational component of the interpretation of test scores. One method for setting standards that is prevalent in educational testing programs—the Bookmark method—is intended to be a less cognitively complex alternative to methods such as the modified Angoff (1971) approach. In this study, we explored that assertion for a licensure examination program in which two independent panels applied the Bookmark method to recommend a cut score on its Written Exam. One panel first made ratings using an ordered item booklet (OIB) in which items were randomly ordered with respect to empirically estimated difficulty, and then made judgments on a correctly ordered OIB. A second panel applied the Bookmark process with only the correctly ordered OIB. Results revealed striking similarities among judgments, calling into question panelists' ability to appropriately engage in the Bookmark method. In addition, under the random-ordering condition, approximately one-third of the panelists placed their bookmarks in a manner inconsistent with the new item difficulties. Implications of these results for the Bookmark standard-setting method are also discussed.
