Abstract

Recent work highlights the ability of verbal machine learning classifiers to distinguish between accurate and inaccurate recognition memory decisions (Dobbins, 2022; Dobbins & Kantner, 2019; Seale-Carlisle, Grabman, & Dodson, 2022). Given the surge of interest in these modeling techniques, there is an urgent need to investigate verbal classifiers' limitations, particularly in applied contexts such as when police collect eyewitnesses' confidence statements. We find that confirmatory feedback (e.g., “This study now has a total of 87 participants, 84 of them made the same decision as you!”) weakens the relationship between identification accuracy and verbal classifier scores to a similar degree as it does for mock witnesses' numeric confidence judgments (Experiment 1). Crucially, for the first time, we compare the discriminative value of verbal classifier scores to the ratings of human evaluators who assessed the identical verbal confidence statements (Experiment 2). Our results suggest that human evaluators outperform the classifier when mock witnesses received no feedback; however, the classifier matches (or exceeds) the performance of human evaluators when mock witnesses received confirmatory feedback. Providing lineup information to human evaluators impaired their ability to distinguish between correct and filler identifications, suggesting that this particular information may encourage the use of inappropriate heuristics when rendering accuracy judgments. Overall, these results suggest that the utility of verbal classifiers may be enhanced when contextual effects (e.g., lineup presence) impair human estimates of others' performance, but that translating witnesses' statements into classifier scores will not fix the problems of an improperly conducted lineup procedure.
