Abstract

Nations are becoming increasingly sensitive about securing their borders and ensuring that those who enter do not intend to do harm in the country. This has led border security organizations such as the Department of Homeland Security (DHS) to investigate better and faster technologies to screen border crossers (Nunamaker et al., 2011). Some of the screening technologies being investigated involve analyzing the words spoken by border crossers during interviews. Researchers believe that by analyzing the words chosen by interviewees, algorithms can be developed to detect deception (Burgoon et al., 2014).

One problem with research on the linguistics of deception is that findings are inconsistent regarding which cues indicate deception and which indicate truth-telling (e.g., Bond Jr. and DePaulo, 2006; Zhou et al., 2004). Because of this inconsistency, linguistic analysis systems are limited in their usefulness for detecting deception. A comprehensive systematic review of the linguistics of deception is needed to create a theoretical model of the constructs relating to the linguistics of deception and to better inform DHS and similar organizations on how best to implement linguistic systems on national borders.

This chapter performs a meta-analysis on linguistic cues of deception to (1) inform linguistic theory, (2) enhance linguistic analysis systems, and (3) inform national security agencies on the linguistics of deception. Linguistic analysis is used by researchers and practitioners to find meaningful patterns in communications, such as customer support chats and police investigations (Zhou et al., 2004). Linguistic analysis systems analyze textual language and synthesize words into linguistic features that represent cognitive or emotional states (Moffitt et al., 2012).
Linguistic analysis systems, such as Linguistic Inquiry and Word Count (LIWC) (Tausczik and Pennebaker, 2009), General Architecture for Text Engineering (GATE) (Gaizauskas et al., 1996), Structured Programming for Linguistic Cue Extraction (SPLICE) (Moffitt et al., 2012), and Agent99 (Cao et al., 2003), are becoming increasingly popular in research for quantifying linguistic features and detecting the cognitive or emotional states of communicators present in the data. These systems catalog hundreds of different cues, ranging from simple counts of pronouns to calculations of a communicator's expressiveness. Although most individual studies that use these systems perform exploratory factor analyses to group the linguistic cues, the studies usually produce different groupings, creating division in the literature and unclear direction for practitioners (e.g., Bond Jr. and DePaulo, 2006; Zhou et al., 2004). Linguistic systems can therefore be enhanced by establishing well-supported groups of cues from a broad set of studies. The meta-analysis in this research groups linguistic features into more appropriate, empirically driven categories of cognitive and emotional states.
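To illustrate the kind of feature extraction these systems perform, the following is a minimal sketch, not code from LIWC, SPLICE, or any of the systems named above: it maps raw text to a few simple numeric cues (word count, first-person pronoun count, and lexical diversity, one commonly studied cue family). The cue set and function names are assumptions chosen for illustration.

```python
import re

# Illustrative first-person pronoun list (an assumption, not a LIWC dictionary).
FIRST_PERSON = {"i", "me", "my", "mine", "we", "us", "our", "ours"}

def extract_cues(text):
    """Return a few simple linguistic cues from a text sample."""
    words = re.findall(r"[a-z']+", text.lower())
    n = len(words)
    return {
        "word_count": n,
        "first_person_pronouns": sum(w in FIRST_PERSON for w in words),
        # Lexical diversity: unique words / total words.
        "lexical_diversity": len(set(words)) / n if n else 0.0,
    }

print(extract_cues("I told you what I saw, and we left together."))
# → {'word_count': 10, 'first_person_pronouns': 3, 'lexical_diversity': 0.9}
```

Real systems compute hundreds of such features and then group them, which is where the inconsistent factor groupings discussed above arise.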
