Abstract

Lie detection research comparing manual and automated coding of linguistic cues is limited. In Experiment 1, we extended this line of research by directly comparing the veracity differences obtained with manual coding and with two coding software programs (Text Inspector and Linguistic Inquiry and Word Count [LIWC]) on the linguistic cue "total details" across eight published datasets. Mixed model analyses revealed that LIWC showed larger veracity differences in total details than Text Inspector and manual coding. Follow-up classification analyses showed that both automated coding and manual coding could accurately classify honest and false accounts. In Experiment 2, we examined whether LIWC's sensitivity to veracity differences resulted from honest accounts including more redundant (repeated) words than false accounts, as LIWC—but not Text Inspector or manual coding—accounts for redundancy. Our prediction was supported, and the most redundant words were function words. The results indicate that automated coding can detect veracity differences in total details and redundancy, but it is not necessarily better than manual coding at accurately classifying honest and false accounts.
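As an illustration of the redundancy cue discussed above (a sketch only; the abstract does not describe LIWC's internal computation, and LIWC's dictionary-based counting differs in detail), the share of word tokens that repeat an earlier token in an account could be computed as:

```python
from collections import Counter


def redundancy(text: str) -> float:
    """Fraction of word tokens that are repeats of an earlier token.

    Illustrative sketch, not the LIWC implementation: tokenization here
    is naive whitespace splitting after lowercasing.
    """
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    # Each word type contributes (count - 1) repeated tokens.
    repeats = sum(c - 1 for c in counts.values())
    return repeats / len(tokens)


# Function words such as "the" and "and" repeat most often, consistent
# with the finding that the most redundant words were function words.
print(redundancy("the cat and the dog and the bird"))  # 3 repeats / 8 tokens
```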
