Abstract

Existing steganographic methods based on adversarial images can only target a single steganalyzer and cannot resist detection by the latest convolutional-neural-network steganalyzers such as SRNet and Zhu-Net. To address this issue, this paper proposes a novel method for enhancing the security of image steganography using multiple adversarial networks and channel attention modules. The proposed method employs a generative adversarial network with a U-Net-based generator to produce high-quality adversarial images, and it exploits the self-learning property of adversarial training to iteratively optimize the parameters of multiple adversarial steganographic networks, yielding adversarial images capable of misleading multiple steganalyzers. Additionally, the proposed scheme adaptively adjusts the distribution of adversarial noise over the original image using multiple lightweight channel attention modules in the generator, further strengthening the anti-steganalysis ability of the adversarial images. Furthermore, the method dynamically combines multiple discrimination losses with an MSE loss to improve the quality of the adversarial images and to help the network converge quickly and stably. Extensive experimental results demonstrate that the proposed algorithm generates adversarial images with an average PSNR of 40.3 dB and misleads advanced steganalyzers with a success rate above 93%. The proposed algorithm also surpasses the compared steganographic methods in security and generalization.
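The abstract mentions lightweight channel attention modules that reweight the adversarial noise per channel. The paper does not specify their internals, so the following is only a minimal NumPy sketch of one common form of channel attention (squeeze-and-excitation style: global average pooling, a small bottleneck, and a sigmoid gate); the function name and the weight shapes `w1`, `w2` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Hypothetical squeeze-and-excitation style channel attention.

    feature_map: array of shape (C, H, W)
    w1: bottleneck weights, shape (C // r, C) for some reduction ratio r
    w2: expansion weights, shape (C, C // r)
    Returns the feature map with each channel rescaled by a learned
    weight in (0, 1).
    """
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    squeeze = feature_map.mean(axis=(1, 2))
    # Excitation: bottleneck MLP with ReLU, then sigmoid gate -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))
    # Rescale each channel of the feature map by its gate value
    return feature_map * gate[:, None, None]
```

In a generator, such a module lets the network concentrate adversarial perturbations in channels (and hence image regions) where they are least detectable, which matches the abstract's claim of adaptively adjusting the noise distribution.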
