In the era of AI-generated content (AIGC), the rapid development of visual content generation technologies such as diffusion models brings potential security risks to society. Existing generated-image detection methods suffer performance drops when faced with out-of-domain generators and image scenes. To mitigate this problem, we propose the Artifact Purification Network (APN), which facilitates artifact extraction from generated images through explicit and implicit purification processes. For the explicit process, a suspicious frequency-band proposal method and a spatial feature decomposition method are proposed to extract artifact-related features. For the implicit process, a training strategy based on mutual information estimation is proposed to further purify the artifact-related features. Experiments are conducted in two settings. First, we perform a cross-generator evaluation, in which detectors trained on data from one generator are evaluated on data produced by other generators. Second, we conduct a cross-scene evaluation, in which detectors trained on a specific content domain (e.g., ImageNet) are assessed on data collected from another domain (e.g., LSUN-Bedroom). Results show that for cross-generator detection, the average accuracy of APN is 5.6%∼16.4% higher than that of 11 previous methods on the GenImage dataset and 1.7%∼50.1% higher on the DiffusionForensics dataset. For cross-scene detection, APN maintains its high performance. Via visualization analysis, we find that the proposed method extracts diverse forgery patterns and condenses the forgery information diluted in unrelated features. We also find that the artifact features APN focuses on across generators and scenes are global and diverse. The code will be available at https://github.com/RichardSunnyMeng/APN-official-codes.
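The abstract does not detail how the suspicious frequency-band proposal works; as a minimal illustrative sketch only (the function name, the radial-band scheme, and all parameters here are assumptions, not the paper's actual method), per-band spectral statistics of an image could be computed like this, after which bands whose statistics differ between real and generated images might be flagged as suspicious:

```python
import numpy as np

def radial_band_energies(image: np.ndarray, n_bands: int = 8) -> np.ndarray:
    """Split the 2-D FFT spectrum of a grayscale image into concentric
    radial frequency bands and return the mean log-magnitude of each band.
    (Hypothetical helper: a stand-in for a frequency-band statistic a
    detector could compare between real and generated images.)"""
    # Centered spectrum and its log-magnitude.
    spec = np.fft.fftshift(np.fft.fft2(image))
    mag = np.log1p(np.abs(spec))
    # Radial distance of each frequency bin from the spectrum center.
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    r_max = r.max()
    # Average log-magnitude inside each annular band.
    energies = np.empty(n_bands)
    for i in range(n_bands):
        lo = i * r_max / n_bands
        hi = (i + 1) * r_max / n_bands
        mask = (r >= lo) & (r < hi)
        energies[i] = mag[mask].mean()
    return energies
```

For example, calling `radial_band_energies(img)` on a 64×64 array yields an 8-vector of band statistics that a downstream classifier could weight or threshold.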