Abstract

The increasing use of Artificial Intelligence (AI) systems for face recognition and video processing raises the stakes of their application in daily life. Critical decisions are increasingly being made with these AI systems in domains such as employment, finance, and crime prevention. These applications rely on abstract concepts such as emotions, trait evaluations (e.g., trustworthiness), and behavior (e.g., deception), which the AI system learns to infer from verbal and non-verbal cues in the human subject stimuli (e.g., facial expressions, movements, audio, text). Because such AI systems are often deployed in high-stakes scenarios, it is of utmost importance that any AI system participating in the decision-making process be highly reliable and credible. In this paper, we specifically consider the feasibility of using such an AI system for deception detection. We examine whether deception can be detected from multimodal cues such as facial expressions, body movements, audio, and video. We experiment on three different datasets with varying degrees of deception to explore the problem of deception detection. We also study state-of-the-art deception detection systems and investigate whether their algorithms extend to new datasets. We conclude that there is a lack of reasonable evidence that AI-based deception detection generalizes across different scenarios of lying (lying deliberately, lying under duress, and lying through half-truths), and that additional factors will need to be considered before such a claim can be made.
