Abstract

With the advancement of deepfake technology, particularly in the audio domain, there is a pressing need for robust detection mechanisms to maintain digital security and integrity. Our research presents "Enhanced Deepfake Audio Detection with Spectro-Temporal Deep Learning Approach," aimed at addressing the escalating challenge of distinguishing genuine audio from sophisticated deepfake manipulations. Our methodology leverages the ADD2022 dataset, which encompasses a wide range of audio clips, to train and evaluate a novel deep learning model. The core of our approach is a meticulous data preprocessing phase in which audio samples are resampled, normalized, and subjected to silence removal to ensure uniformity and improve the quality of the model's input. Our model architecture combines Convolutional Neural Networks (CNNs) for extracting spectral features with Recurrent Neural Networks (RNNs) or Long Short-Term Memory (LSTM) networks for analyzing temporal dynamics. This hybrid model is adept at identifying the nuanced anomalies characteristic of deepfake audio. The model undergoes rigorous training and optimization, with performance evaluated against key metrics such as accuracy, precision, recall, F1 score, and the Equal Error Rate (EER). Our findings demonstrate the Enhanced Deepfake Audio Detection model's superior capability to discern deepfake audio, highlighting its potential as a pivotal tool in the fight against digital audio manipulation. This research contributes significantly to the field of digital forensics, offering a scalable and effective solution for verifying the authenticity of audio content in an era dominated by deepfake technology.
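
As a rough illustration of the preprocessing phase described above, the sketch below resamples a clip, peak-normalizes it, and trims silence. The 16 kHz target rate and the 30 dB trim threshold are illustrative assumptions; the abstract does not specify the exact parameters used.

```python
# Minimal preprocessing sketch: resample, peak-normalize, trim silence.
# Parameter values (16 kHz, top_db=30) are assumptions for illustration.
import librosa
import numpy as np

def preprocess(path: str, target_sr: int = 16000) -> np.ndarray:
    audio, _ = librosa.load(path, sr=target_sr)        # resample on load
    audio = audio / (np.max(np.abs(audio)) + 1e-9)     # peak-normalize
    audio, _ = librosa.effects.trim(audio, top_db=30)  # trim silence
    return audio
```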
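
To make the spectro-temporal hybrid concrete, here is a minimal PyTorch sketch in which a CNN front end extracts spectral features from a log-mel spectrogram and an LSTM models their temporal dynamics. All layer sizes and the input shape (64 mel bins) are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a CNN + LSTM detector over log-mel spectrograms.
# Architecture details are assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class SpectroTemporalDetector(nn.Module):
    def __init__(self, n_mels: int = 64, hidden: int = 128):
        super().__init__()
        # CNN front end: two conv blocks, each halving frequency and time.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
        )
        feat_dim = 32 * (n_mels // 4)  # channels * remaining mel bins
        # LSTM over the time axis of the CNN feature map.
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # logit: genuine vs. deepfake

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_mels, time) log-mel spectrogram
        z = self.cnn(x)                       # (B, C, M', T')
        z = z.permute(0, 3, 1, 2).flatten(2)  # (B, T', C*M')
        _, (h, _) = self.lstm(z)              # final hidden state
        return self.head(h[-1]).squeeze(-1)   # (B,) logits

model = SpectroTemporalDetector()
logits = model(torch.randn(8, 1, 64, 200))  # 8 clips, 200 frames each
```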
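
The EER named among the evaluation metrics is the operating point at which the false positive rate equals the false negative rate. A standard way to compute it from per-clip scores is shown below; the labels and scores here are placeholders, not results from this work.

```python
# Equal Error Rate from binary labels and detection scores.
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels: np.ndarray, scores: np.ndarray) -> float:
    """labels: 1 = genuine, 0 = deepfake; scores: higher = more genuine."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))  # threshold where FNR ~= FPR
    return float((fpr[idx] + fnr[idx]) / 2)

labels = np.array([1, 1, 0, 0, 1, 0])              # placeholder data
scores = np.array([0.9, 0.8, 0.35, 0.1, 0.6, 0.4])
print(f"EER = {equal_error_rate(labels, scores):.3f}")
```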
