Abstract
In this work, we create a new large-scale, unconstrained, high-quality Deepfake Image (DFIM-HQ) dataset containing 140K images. Compared to existing datasets, DFIM-HQ covers diverse scenarios, pose variations, quality degradations, and illumination variations, making it particularly challenging. Because computer vision models learn a task by capturing relevant statistics from their training data, they tend to pick up spurious correlations with age, gender, and race, leading to biased models. To account for AI bias in our proposed DFIM-HQ dataset, we design a simple yet effective image recognition benchmark for studying bias mitigation. Our detection system uses an Inception-based network to extract frame-level features and automatically detect manipulated content. We also propose an explainability framework that provides a better understanding of the model's predictions; such insights can be used to improve the model and thereby help build trust in it. Our evaluation shows that our frameworks achieve competitive results in detecting deepfake images using deep learning architectures.
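The abstract does not give the exact architecture, so the following is only a minimal sketch of what an Inception-based frame-level deepfake classifier could look like: a simplified Inception-style module (parallel 1x1, 3x3, and 5x5 convolution branches plus a pooling branch, concatenated along the channel axis) feeding a two-way real/fake head. The layer widths, input size, and class names here are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn


class InceptionBlock(nn.Module):
    """Simplified Inception-style module: parallel conv/pool branches
    whose outputs are concatenated along the channel dimension."""

    def __init__(self, in_ch: int):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 16, kernel_size=1)               # 1x1 branch
        self.b3 = nn.Conv2d(in_ch, 16, kernel_size=3, padding=1)    # 3x3 branch
        self.b5 = nn.Conv2d(in_ch, 16, kernel_size=5, padding=2)    # 5x5 branch
        self.bp = nn.Sequential(                                    # pool branch
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(in_ch, 16, kernel_size=1),
        )

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)


class DeepfakeDetector(nn.Module):
    """Toy frame-level real/fake classifier built on the block above
    (hypothetical sizes; not the paper's exact network)."""

    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1)
        self.inception = InceptionBlock(32)          # outputs 4 x 16 = 64 channels
        self.pool = nn.AdaptiveAvgPool2d(1)          # global frame-level feature
        self.fc = nn.Linear(64, 2)                   # logits: real vs. fake

    def forward(self, x):
        x = torch.relu(self.stem(x))
        x = torch.relu(self.inception(x))
        return self.fc(self.pool(x).flatten(1))


# A batch of 4 RGB frames at an assumed 128x128 resolution.
frames = torch.randn(4, 3, 128, 128)
logits = DeepfakeDetector()(frames)
print(tuple(logits.shape))  # (4, 2): one real/fake score pair per frame
```

In practice such a detector would be trained with a cross-entropy loss on labeled real/fake frames; the explainability framework mentioned in the abstract would then operate on this model's predictions.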