Abstract

Generative adversarial network (GAN)-generated image detection is a crucial yet challenging problem. The advancement of diverse generative models has made it easier to synthesize realistic images, but it has also increased the risk of these images being used maliciously. Recent fake-image detection techniques often overfit to specific GANs and fail to generalize to others. Each GAN leaves a distinct fingerprint on the images it generates. We propose a simple and compact attention network that exploits high-frequency and noise-residual patterns in images. In the preprocessing phase, we design a two-channel image representation: one channel applies a bilateral high-pass filter (BiHPF) to extract high-frequency patterns, and the other uses photo response non-uniformity (PRNU) to capture noise residuals from GAN-generated and real images. The resulting feature maps are passed through the attention network, followed by a dense layer, for classification. We assess the generalizability and robustness of the model through extensive experiments. The model is evaluated on a benchmark dataset in various settings, including cross-category and cross-GAN evaluation with varying class settings and different color manipulations. This study demonstrates that the proposed model outperforms state-of-the-art (SOTA) methods by approximately 5% across these experimental settings.
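
As a rough illustration of the pipeline described above, the sketch below builds a two-channel input (a high-frequency channel and a noise-residual channel) and passes it through a small attention network with a dense classification layer. This is a minimal sketch under stated assumptions: the Laplacian high-pass filter stands in for BiHPF, the blur-residual stands in for PRNU extraction, and the squeeze-and-excitation style attention and layer sizes are illustrative placeholders, not the paper's actual configuration.

```python
# Hypothetical sketch of the two-channel attention classifier. The filter
# implementations and network sizes are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F


def highpass_channel(gray: torch.Tensor) -> torch.Tensor:
    """Placeholder for BiHPF: a 3x3 Laplacian high-pass filter."""
    kernel = torch.tensor([[0., -1., 0.],
                           [-1., 4., -1.],
                           [0., -1., 0.]]).view(1, 1, 3, 3)
    return F.conv2d(gray, kernel, padding=1)


def noise_residual_channel(gray: torch.Tensor) -> torch.Tensor:
    """Placeholder for a PRNU-style residual: image minus a smoothed copy."""
    blurred = F.avg_pool2d(gray, kernel_size=3, stride=1, padding=1)
    return gray - blurred


class ChannelAttention(nn.Module):
    """Simple squeeze-and-excitation style channel attention."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.fc(x.mean(dim=(2, 3)))           # global average pool -> channel weights
        return x * w.unsqueeze(-1).unsqueeze(-1)  # reweight feature maps


class TwoChannelAttentionNet(nn.Module):
    """Two-channel input (high-frequency + noise residual) -> conv features
    -> channel attention -> dense layer for real/fake classification."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        )
        self.attention = ChannelAttention(32)
        self.classifier = nn.Linear(32, 2)  # real vs. GAN-generated

    def forward(self, gray: torch.Tensor) -> torch.Tensor:
        x = torch.cat([highpass_channel(gray), noise_residual_channel(gray)], dim=1)
        x = self.attention(self.features(x))
        return self.classifier(x.mean(dim=(2, 3)))  # global pool -> class logits


if __name__ == "__main__":
    img = torch.rand(1, 1, 64, 64)              # a single grayscale image
    print(TwoChannelAttentionNet()(img).shape)  # torch.Size([1, 2])
```

The design choice illustrated here is that classification operates on filtered residual channels rather than raw RGB pixels, which is what lets the detector key on GAN fingerprints instead of image content.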
