Abstract

In lie detection, a common challenge arises when the training and testing datasets follow dissimilar distributions. The resulting model mismatch degrades the performance of a pretrained deep learning model. To address this problem, we propose a lie detection technique based on a domain adversarial neural network that employs dual-mode features. First, a deep neural network is used as a feature extractor to isolate the speech and facial expression features exhibited by deceptive speakers, so that the data distributions of the source- and target-domain signals can be aligned. Second, a domain-adversarial transfer-learning mechanism is introduced to build the network, facilitating the migration of lie-related features from the source (training) domain to the target (testing) domain and thereby improving detection accuracy. Simulations on two professional deception databases with different distributions show that the proposed method achieves a higher detection rate than a unimodal feature detection algorithm, with a maximum improvement of 23.3% over a traditional neural network-based detection method. The proposed method can therefore learn features that are unrelated to domain category, effectively mitigating the distribution mismatch between training and testing deception data.
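The abstract describes a domain-adversarial architecture with a shared feature extractor over fused speech and facial features, a lie classifier, and a domain classifier trained through a gradient-reversal layer. The sketch below illustrates how such a network is commonly assembled in PyTorch; the module names, layer sizes, fusion by concatenation, and the `lam` reversal weight are illustrative assumptions rather than the paper's exact implementation.

```python
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; scales the gradient by -lam on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class DualModeDANN(nn.Module):
    """Sketch of a domain-adversarial network over fused speech + facial-expression features.

    Dimensions and layer counts are assumptions for illustration only.
    """
    def __init__(self, speech_dim=128, face_dim=128, hidden_dim=256, lam=1.0):
        super().__init__()
        self.lam = lam
        # Shared feature extractor: fuses the two modalities into one representation
        # whose distribution is aligned across source and target domains.
        self.extractor = nn.Sequential(
            nn.Linear(speech_dim + face_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Label classifier: truthful vs. deceptive.
        self.lie_classifier = nn.Linear(hidden_dim, 2)
        # Domain classifier: source vs. target corpus, trained adversarially
        # through the gradient-reversal layer.
        self.domain_classifier = nn.Linear(hidden_dim, 2)

    def forward(self, speech_feat, face_feat):
        z = self.extractor(torch.cat([speech_feat, face_feat], dim=1))
        lie_logits = self.lie_classifier(z)
        domain_logits = self.domain_classifier(GradientReversal.apply(z, self.lam))
        return lie_logits, domain_logits

# Usage: the lie loss is computed on labelled source data, the domain loss on both corpora.
model = DualModeDANN()
speech, face = torch.randn(8, 128), torch.randn(8, 128)
lie_logits, domain_logits = model(speech, face)
```

In this setup, minimizing the domain loss through the reversed gradient pushes the extractor toward features the domain classifier cannot separate, which is the mechanism the abstract relies on to learn features unrelated to domain category.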
