Abstract

Background: Deception detection is a prevalent problem for security practitioners. With a need for more large-scale approaches, automated methods using machine learning have gained traction. However, detection performance still implies considerable error rates. Findings from different domains suggest that hybrid human-machine integrations could offer a viable path in detection tasks.

Method: We collected a corpus of truthful and deceptive answers about participants' autobiographical intentions (n = 1640) and tested whether a combination of supervised machine learning and human judgment could improve deception detection accuracy. Human judges were presented with the outcome of the automated credibility judgment of truthful or deceptive statements. They could either fully overrule it (hybrid-overrule condition) or adjust it within a given boundary (hybrid-adjust condition).

Results: The data suggest that in neither of the hybrid conditions did the human judgment add a meaningful contribution. Machine learning in isolation identified truth-tellers and liars with an overall accuracy of 69%. Human involvement through hybrid-overrule decisions brought the accuracy back to chance level. The hybrid-adjust condition did not improve deception detection performance. The decision-making strategies of the human judges suggest that the truth bias - the tendency to assume the other person is telling the truth - could explain the detrimental effect.

Conclusions: The current study does not support the notion that humans can meaningfully add to the deception detection performance of a machine learning system. All data are available at https://osf.io/45z7e/.
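The setup described above can be sketched in code. This is a minimal illustration only, assuming bag-of-words features, a logistic-regression classifier, toy statements, and an invented adjustment boundary of 0.2; the paper's actual features, model, and boundary are not specified here and may differ.

```python
# Hypothetical sketch of the study's pipeline: a supervised classifier
# produces a credibility score, which a human judge may then either
# adjust within a boundary (hybrid-adjust) or replace (hybrid-overrule).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for the corpus of statements about autobiographical
# intentions (the real study used n = 1640 statements).
statements = [
    "I will visit my grandmother next weekend as planned.",
    "I intend to travel abroad to meet an old friend.",
    "I am going to attend a conference for my work.",
    "I plan to stay home and finish a personal project.",
] * 10
labels = [1, 0, 1, 0] * 10  # 1 = truthful, 0 = deceptive (toy labels)

clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(statements, labels)

# Machine judgment: estimated probability that a statement is truthful.
p_machine = clf.predict_proba(["I will visit my grandmother soon."])[0][1]

def hybrid_adjust(p_machine, human_shift, boundary=0.2):
    """Hybrid-adjust condition (sketch): the human may move the machine's
    credibility score, but only within +/- `boundary` of it."""
    shift = max(-boundary, min(boundary, human_shift))
    return min(1.0, max(0.0, p_machine + shift))

def hybrid_overrule(p_machine, human_label):
    """Hybrid-overrule condition (sketch): the human sees the machine's
    verdict but may replace it entirely with their own label."""
    if human_label is not None:
        return human_label
    return int(p_machine >= 0.5)
```

The clipping in `hybrid_adjust` captures the key design difference between the two conditions: in hybrid-adjust the machine's score anchors the final judgment, whereas in hybrid-overrule the human can discard it completely, which is where the truth bias can pull accuracy back toward chance.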

Highlights

  • Determining who is lying and who is telling the truth is at the core of the legal system and has sparked the interest of the academic community for decades

  • While some approaches rely on physiological measurements such as brain potentials or skin conductance, others look at the verbal content (Oberlader et al., 2016) and the linguistic properties of statements made by liars and truth-tellers (e.g., Perez-Rosas & Mihalcea, 2014)

  • This paper aims to examine how computer-automated deception detection can be combined with human judgment in a setting of deceptive intentions

Summary

Introduction

Determining who is lying and who is telling the truth is at the core of the legal system and has sparked the interest of the academic community for decades. Academic research on deception detection has moved closer to practitioners' need to assess whether someone might be a threat and might hold malicious intent. Such an approach is proactive and in line with the crime prevention task of law enforcement. Border control settings or airport security checks require the screening of vast numbers of people. These contexts require approaches that are structurally different from those applied in, for example, murder investigations (for a review of the need for large-scale deception detection methods, see Kleinberg et al., 2019b).
