Abstract

In achievement testing, examinees usually have to mark off their selected answers to multiple-choice questions on a separate answer sheet. Using the item-sampling model of criterion-referenced measurement, the influence of errors in marking off on expected item scores is investigated. It is found that, in general, this type of error lowers expected item scores and that the magnitude of the decrease is substantial enough to deserve attention.
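As a minimal sketch of why marking errors depress expected scores (the parameters here are illustrative assumptions, not quantities taken from the paper): let \(\pi\) be an examinee's probability of a correct response absent marking errors, \(e\) the probability of misplacing a mark on the answer sheet, and \(c\) the (typically small) probability that a misplaced mark still lands on the correct option. The expected observed item score is then roughly

\[ E[X^{*}] = (1 - e)\,\pi + e\,c, \]

so the expected score drops by \(e(\pi - c)\), which is positive for any examinee performing above the level a misplaced mark would achieve by chance.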
