Abstract

Lying-state detection is a typical psychological computing problem marked by significant spatiotemporal dynamics. However, most existing detection methods do not fully account for the dynamic changes in psychological states, and the distribution information of time series in multimodal spaces has not been effectively utilized. Current detection systems also lack adaptive fusion methods for multimodal features, making it difficult to extract their spatiotemporal dependencies. Therefore, a novel speech lie detection model is proposed that combines a Convolutional Neural Network (CNN) with a Bidirectional Long Short-Term Memory (BiLSTM) network and fuses multimodal features through a Spatiotemporal Attention Mechanism (SAM). The CNN extracts local spatial features, while the BiLSTM handles long sequences and long-term dependencies through bidirectional information flow, capturing the contextual information in sequences. The proposed model combines the short-term stationary characteristics of speech in time with the diversity of semantic environments in space, and introduces SAM to fuse multimodal features carrying temporal and spatial dependencies into feature vectors for detecting lying psychological states. Simulation experiments on the open-source Real-Life Trial deception database show that the average lie detection rate reaches 88.09%. Overall, the proposed model achieves a significant improvement in detection accuracy over existing lie detection models.
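
The abstract does not give implementation details, so the following is only a minimal sketch of the kind of CNN-BiLSTM-attention pipeline it describes. All names, layer sizes, the 40-dimensional log-Mel input, and the simple additive attention (standing in for the paper's SAM) are assumptions for illustration, not the authors' actual architecture.

```python
import torch
import torch.nn as nn


class CNNBiLSTMAttention(nn.Module):
    """Hypothetical sketch: CNN front end, BiLSTM sequence model,
    and an attention-weighted pooling layer for binary truth/lie
    classification of speech feature sequences."""

    def __init__(self, n_features=40, cnn_channels=64, lstm_hidden=128):
        super().__init__()
        # 1-D convolution over time extracts local spatial (spectral)
        # patterns from short windows of frames.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_features, cnn_channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=2),
        )
        # BiLSTM captures long-term dependencies and context in both
        # temporal directions.
        self.bilstm = nn.LSTM(
            cnn_channels, lstm_hidden, batch_first=True, bidirectional=True
        )
        # Additive attention scores each time step; the weighted sum acts
        # as the fused utterance-level feature vector.
        self.attn = nn.Linear(2 * lstm_hidden, 1)
        self.classifier = nn.Linear(2 * lstm_hidden, 2)  # truth vs. lie

    def forward(self, x):
        # x: (batch, time, n_features) -> (batch, n_features, time) for Conv1d
        h = self.cnn(x.transpose(1, 2)).transpose(1, 2)
        h, _ = self.bilstm(h)                    # (batch, time', 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)   # attention weights over time
        context = (w * h).sum(dim=1)             # fused feature vector
        return self.classifier(context)


if __name__ == "__main__":
    model = CNNBiLSTMAttention()
    dummy = torch.randn(4, 200, 40)  # 4 utterances, 200 frames, 40 features
    print(model(dummy).shape)        # torch.Size([4, 2])
```

In this sketch the attention weights only pool over time; the paper's SAM additionally fuses multimodal features across the spatial dimension, which would require a second attention stage not shown here.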
