Abstract

To the Editor: We thank Professors Scurich, Morrison, and Sinha, and Mr Gutierrez for their thoughtful comments on our article (Arkes and Koehler, 2021). We agree with Scurich (2023) that when an examiner knows that he or she is being tested, the results of such a test are highly suspect. If an examiner can avoid making errors by deeming a comparison to be inconclusive, and if inconclusives are never deemed to be indicative of an error, then a ‘strategic’ examiner can inflate accuracy levels by rendering an inconclusive decision on any difficult comparison. Such a test will not provide an unbiased measure of an examiner’s accuracy. But this is not reason enough to change the way accuracy is measured. In support of a different view on the matter, Scurich (2023) provides an analogy offered by Kaye et al. (2022) in which a student answers ‘I don’t know’ to a true–false test question. If the student should know the answer, then Kaye et al. (2022) say the ‘I don’t know’ answer should be counted as an error. We do not think that this analogy is apt in the forensic context. Teachers are charged with determining whether a student should know the answer to a question. For example, if the topic was covered in the required reading or in a lecture, then the student should know the answer. In a forensic test, one cannot say whether the examiner should know the answer. Dror and Scurich (2020) suggested strategies that might help one determine whether the examiner should know the answer, but in our response to Dror and Scurich (Arkes and Koehler, 2022) we offered reasons why we think those strategies are inadequate.
