Images generated by Generative Adversarial Networks (GANs) are often too realistic for humans to distinguish from real ones. Recently, several detection methods have been proposed to separate generated images from real ones. However, existing evasion methods rely on specific detection techniques, so images camouflaged against one detector can still be easily identified by other types of detectors. This study investigates the security of GAN-generated image detection by devising a method that evades detection in general. In our model, a GAN disentangles the features that are related and unrelated to distinguishing real from generated images: the unrelated features carry the image content, while the related features provide the cues used to identify generated images. Our method then camouflages a generated image by combining its unrelated features with the related features of real images. The main advantages of our model are its ability to generalize across different detectors and to adapt to prior information about a detector. Experimental results confirm the superior evasion capability of the proposed method compared to other detector-dependent and detector-independent methods across several popular detection methods.
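The camouflage step described above can be illustrated with a minimal sketch. Everything here is hypothetical: the real model uses learned GAN encoders, whereas this toy treats features as flat vectors and "disentanglement" as a simple split into a detection-related half and a content-related (unrelated) half.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature dimensionality; in the actual model these would be
# learned latent representations, not fixed-size vector halves.
FEAT_DIM = 8

def encode(image_vec):
    """Toy disentanglement: split a flat feature vector into a
    detection-'related' half and a content-'unrelated' half."""
    related = image_vec[:FEAT_DIM]
    unrelated = image_vec[FEAT_DIM:]
    return related, unrelated

def camouflage(generated_vec, real_vec):
    """Keep the generated image's content (unrelated features) but
    borrow the detection-related features from a real image."""
    _, unrelated_gen = encode(generated_vec)
    related_real, _ = encode(real_vec)
    return np.concatenate([related_real, unrelated_gen])

generated = rng.normal(size=2 * FEAT_DIM)
real = rng.normal(size=2 * FEAT_DIM)
camo = camouflage(generated, real)

# The camouflaged vector carries the generated image's content and
# the real image's detector-facing cues.
assert np.allclose(camo[FEAT_DIM:], generated[FEAT_DIM:])
assert np.allclose(camo[:FEAT_DIM], real[:FEAT_DIM])
```

The design choice this sketch highlights is that evasion does not perturb pixels toward a specific detector's decision boundary; it replaces the detector-facing feature component wholesale, which is why the approach can generalize across detectors.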