Abstract
Detecting deception has significant implications in fields such as law enforcement and security. This research aims to develop an effective lie detection system using electroencephalography (EEG), which measures the brain's electrical activity to capture neural patterns associated with deceptive behavior. Using the Muse II headband, we recorded EEG data across 5 channels from 34 participants aged 16-25 (32 males and 2 females), whose backgrounds included high school students, undergraduates, and employees. Data were collected in a comfortable, interference-free setting suited to interviews. The contributions of this research are the creation of a lie detection dataset, an autoencoder model for feature extraction, and a deep neural network for classification. Data preparation involved several pre-processing steps: converting the signals from microvolts to volts, band-pass filtering (3-30 Hz), applying a short-time Fourier transform (STFT) with a 256-sample window and 128-sample overlap, z-score normalization, and generating spectrograms from the power spectral density below 60 Hz. Feature extraction was performed with an autoencoder, followed by classification with a deep neural network. The experiments tested three autoencoder models with different latent space sizes and two types of classifiers: three deep neural network models trained from scratch, including an LSTM, and six models built on pre-trained ResNet50 and EfficientNetV2-S backbones, some with attention layers. The data were split into 75% for training, 10% for validation, and 15% for testing. The best model, an autoencoder with a latent space of 64×10×51 combined with a classifier based on the pre-trained EfficientNetV2-S, achieved 97% accuracy on the training set, 72% on the validation set, and 71% on the testing set. On the testing set it achieved an F1-score of 0.73, accuracy of 0.71, precision of 0.68, and recall of 0.78. The novelty of this research includes the use of a cost-effective EEG reader with minimal electrodes, the exploration of single- and three-dimensional autoencoders, and the comparison of non-pretrained classifiers (LSTM, 2D convolution, and fully connected layers) with pretrained models incorporating attention layers.
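The pre-processing steps listed above can be illustrated with a short sketch. The snippet below is not the authors' code: the 256 Hz sampling rate, the 4th-order Butterworth filter design, and the function name preprocess_channel are assumptions made only for illustration; the window length, overlap, band limits, and 60 Hz cutoff follow the abstract.

# Illustrative sketch of the described pre-processing pipeline (assumed
# parameters: 256 Hz sampling rate, 4th-order Butterworth band-pass filter).
import numpy as np
from scipy import signal

FS = 256  # assumed Muse II sampling rate in Hz

def preprocess_channel(raw_uv, fs=FS):
    """Turn one raw EEG channel (in microvolts) into a z-scored spectrogram."""
    x = raw_uv * 1e-6                                            # microvolts -> volts
    b, a = signal.butter(4, [3, 30], btype="bandpass", fs=fs)    # 3-30 Hz band-pass
    x = signal.filtfilt(b, a, x)                                 # zero-phase filtering
    f, t, Zxx = signal.stft(x, fs=fs, nperseg=256, noverlap=128) # 256 window, 128 overlap
    psd = np.abs(Zxx) ** 2                                       # power spectral density
    psd = psd[f < 60, :]                                         # keep components below 60 Hz
    return (psd - psd.mean()) / (psd.std() + 1e-12)              # z-score normalization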
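Likewise, pairing an autoencoder feature extractor with a classifier built on a pre-trained EfficientNetV2-S backbone could look roughly like the sketch below. The layer sizes, the 1×1 adapter convolution, and the class names are illustrative assumptions, not the paper's exact architecture; only the use of an autoencoder latent representation and a pre-trained EfficientNetV2-S with a binary output comes from the abstract.

# Minimal sketch (not the authors' architecture) of an autoencoder feature
# extractor plus an EfficientNetV2-S-based binary classifier.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_v2_s, EfficientNet_V2_S_Weights

class SpectrogramAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder compresses the spectrogram into a smaller latent feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder reconstructs the input; reconstruction error drives training.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)            # latent feature map passed to the classifier
        return self.decoder(z), z

class LieClassifier(nn.Module):
    def __init__(self, latent_channels=64):
        super().__init__()
        # 1x1 convolution maps latent channels to the 3-channel input EfficientNet expects.
        self.adapter = nn.Conv2d(latent_channels, 3, kernel_size=1)
        self.backbone = efficientnet_v2_s(weights=EfficientNet_V2_S_Weights.DEFAULT)
        # Replace the final layer with a binary (truthful vs. deceptive) head.
        in_features = self.backbone.classifier[1].in_features
        self.backbone.classifier[1] = nn.Linear(in_features, 2)

    def forward(self, z):
        return self.backbone(self.adapter(z))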
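Finally, the 75% / 10% / 15% split and the reported metrics can be reproduced with standard scikit-learn utilities. The two-step split below is an assumed way to obtain those proportions, and the random arrays stand in for the real EEG features and model predictions; neither is from the paper.

# Hypothetical illustration of the 75/10/15 split and the reported metrics,
# using random placeholder data in place of the real EEG features and predictions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(340, 64))        # placeholder feature vectors
y = rng.integers(0, 2, size=340)      # placeholder truthful(0)/deceptive(1) labels

# First hold out 15% for testing, then take 10% of the total for validation.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.15, stratify=y, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.10 / 0.85, stratify=y_rest, random_state=0)

# Any trained classifier producing y_pred on X_test is then scored as in the abstract.
y_pred = rng.integers(0, 2, size=len(y_test))   # stand-in for model predictions
print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
print("F1-score :", f1_score(y_test, y_pred))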