Abstract

Machine learners trained on verbal justifications of recognition decisions reliably predict recognition accuracy. If these recognition language classifiers are recollection sensitive, they should generalize beyond the single-item, verbal recognition paradigms upon which they were trained. To test this, three classifiers were trained on justification language from three different single-item verbal recognition paradigms, learning to distinguish the language justifying hits from false alarms, high- from medium-confidence hits, and remember from know judgments. The resulting classifiers were then used to predictively score language justifying correct versus incorrect eyewitness lineup selections. This constituted a test of far transfer because of the differences in materials (faces vs. words), subject populations (undergraduate vs. online), testing procedures (single vs. multiple items), and test lengths (12 vs. hundreds of targets per subject), among others. All three classifiers reliably predicted eyewitness accuracy despite these differences. Additionally, mixed modeling showed that the classifiers exhibited both convergent and divergent validity with respect to the recollection sensitivity hypothesis. That is, they strongly predicted the accuracy of eyewitness selections (i.e., hits vs. false alarms) but failed to predict the accuracy of eyewitness rejections (i.e., correct rejections vs. misses). Moreover, one classifier predicted eyewitness confidence despite being trained on a design devoid of all metacognitive judgments. These findings support the hypothesis that recognition language classifiers detect recollection conveyed in the language subjects use to justify their memory decisions.
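
For readers unfamiliar with this class of method, the following minimal Python sketch illustrates the train-then-transfer workflow the abstract describes. It is an illustrative assumption, not the paper's actual pipeline: the TF-IDF/logistic-regression model, the toy justification texts, and all variable names (train_texts, train_labels, eyewitness_texts) are hypothetical.

```python
# Hedged sketch of a recognition-language classifier and a far-transfer
# test. All data and model choices here are placeholders, not the
# authors' materials or methods.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Justifications from a single-item recognition task:
# label 1 = hit (correct "old" response), label 0 = false alarm.
train_texts = [
    "I clearly remember seeing this word on the list",
    "It just feels vaguely familiar, I guess",
]
train_labels = [1, 0]

# Bag-of-words features feeding a linear classifier; any standard
# text-classification pipeline would serve for this illustration.
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
classifier.fit(train_texts, train_labels)

# Far-transfer step: score justifications of eyewitness lineup
# selections with the classifier trained on the word-list task.
eyewitness_texts = ["I recognize his eyes and the scar on his cheek"]
scores = classifier.predict_proba(eyewitness_texts)[:, 1]
print(scores)  # higher scores predict a correct selection
```

In this sketch, as in the study's design, the transfer test keeps the trained classifier fixed and only swaps in out-of-domain justification language at scoring time.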
