Abstract

It is well known that assigning different numerical weights to the best, or correct, responses to different items on objective tests does not appreciably change the rank order of total scores on the test. Less well understood is the effect of assigning different weights to “non-best,” or incorrect, responses. This paper illustrates how total score variance is increased by such differential response weighting. Three cases are discussed: (I) the case in which the best response to an item is assigned one weight and the non-best responses another weight, the two weights remaining constant from item to item; (II) the case in which each of the k response options is assigned a different weight, the k weights remaining constant from item to item; (III) the case in which there are k response weights per item, but the weights vary from item to item. A consideration of these three cases shows that the increase in total test score variance due to differential response weighting varies directly with the total number of items on the test, the number of response options per item, the square of the differential response weights, and the proportion of examinees selecting a particular response option.
