To a novice in a particular domain, the cognitive feats that experts are capable of performing seem impressive, even extraordinary. According to the well-established exemplar theory of categorization (e.g., Brooks, 1987; Medin and Ross, 1989), a new category member in everyday classification (e.g., a bird, a table, or a car) or expert classification (e.g., an abnormal chest x-ray, a patient with myocardial ischaemia, or a poor chess move) is categorized on the basis of its similarity to individual prior cases. Often this sensitivity develops effortlessly and without any intention to learn similarities or differences among the exemplars.

Experts can do a lot with a little. Across various domains of expertise, experts can perform quickly and accurately when given only a small amount of information, as in chess (Gobet and Charness, 2006), fireground command (Klein, 1998), radiology (Myles-Worsley et al., 1988; Evans et al., 2013), and dermatology (Norman et al., 1989). Experiential knowledge built from hundreds of thousands of prior instances serves as a rich source of analogies that permits efficient problem solving.

A fruitful approach to understanding these cognitive feats has been to identify where expertise lies in various domains. Expertise in ball sports, for example, seems to lie in anticipating where the ball will be (Abernethy, 1991); expertise in wine seems to lie in applying verbal labels (Hughson and Boakes, 2001); expertise in radiology seems to lie in rapid discrimination of normal and abnormal radiographs (Evans et al., 2013); and expertise in chess seems to lie in rapid retrieval of board configurations from memory (Chase and Simon, 1973).

Over the last several years, we have been working with a fascinating group of experts who spend several hours a day examining a highly structured set of impressions. When a fingerprint is found at a crime scene, it is a human examiner, not a machine, who is faced with the task of identifying the person who left it. Professional fingerprint examiners are usually sworn police officers who use image enhancement tools, such as Photoshop or a physical magnifying glass, and database tools that provide a list of possible matching candidates. They place a crime scene print and a suspect print side by side, physically or on a computer screen, and visually compare the prints to judge whether they came from the same person or from two different people.

These fingerprint examiners have testified in court for over one hundred years, but there have been few experiments directly investigating the extent to which experts can correctly match fingerprints to one another, how competent and proficient fingerprint experts are, how examiners make their decisions, or the factors that affect performance (Loftus and Cole, 2004; Saks and Koehler, 2005; Vokey et al., 2009; Spinney, 2010b; Thompson et al., 2013a). Indeed, many examiners have even claimed that fingerprint identification is infallible (Federal Bureau of Investigation, 1984). Academics, judges, scientists, and US Senators have reported on the absence of solid scientific practices in the forensic sciences. They highlight the absence of experiments on human expertise in forensic pattern matching, suggesting that faulty analyses may be contributing to wrongful convictions of innocent people (Edwards, 2009; National Research Council, 2009; Campbell, 2011; Carle, 2011; Expert Working Group on Human Factors in Latent Print Analysis, 2012; Maxmen, 2012), and they lament the lack of a research culture in the forensic sciences (Mnookin et al., 2011). The field of forensics is, however, beginning to acknowledge the central role that fallible humans play in the identification process (Tangen, 2013).

Our first point of inquiry was to see whether qualified, court-practicing fingerprint examiners are any more accurate than the person on the street, and to get a feel for the kinds of errors examiners make. In our first experiment (Tangen et al., 2011), we tested the matching accuracy of fingerprint examiners from Australian state and federal law enforcement agencies. In a signal detection paradigm, we created ground-truth matching prints for use as targets, and highly similar, non-matching prints from a national database search for use as distractors. We found that qualified, court-practicing fingerprint experts were exceedingly accurate compared with novices. Experts tended to err on the side of caution, making more errors of the sort that could allow a guilty person to escape detection than errors of the sort that could falsely incriminate an innocent person. A similar experiment, with participants from the US Federal Bureau of Investigation, produced similar results (Ulery et al., 2011), and a follow-up experiment found variability in the consistency within and between examiners' decisions (Ulery et al., 2012). An examiner's expertise seems to lie, not in matching prints
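As a minimal sketch of the signal detection framing mentioned above, the example below computes sensitivity (d-prime) and response criterion (c) from a hit rate (matching pairs correctly called "match") and a false-alarm rate (distractor pairs incorrectly called "match"). The numbers are purely hypothetical and are not taken from any of the studies cited here; they only illustrate how a conservative criterion (positive c) corresponds to the pattern of more misses than false incriminations described above.

    # Illustrative signal detection calculation with hypothetical rates.
    from scipy.stats import norm

    def sdt_measures(hit_rate, false_alarm_rate):
        """Return sensitivity (d-prime) and criterion (c) for one observer."""
        z_hit = norm.ppf(hit_rate)
        z_fa = norm.ppf(false_alarm_rate)
        d_prime = z_hit - z_fa              # ability to discriminate match from non-match
        criterion = -0.5 * (z_hit + z_fa)   # positive c = conservative ("different") bias
        return d_prime, criterion

    # Hypothetical expert: some misses, almost no false incriminations.
    print(sdt_measures(hit_rate=0.80, false_alarm_rate=0.01))
    # Hypothetical novice: similar hit rate, far more false incriminations.
    print(sdt_measures(hit_rate=0.80, false_alarm_rate=0.30))

In this toy comparison the expert shows both higher sensitivity and a positive criterion, which is one way to express, in signal detection terms, the conservative error pattern reported for the experts in these experiments.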