Abstract

Because it does not modify an existing cover, generative steganography is more secure than modification-based steganography. However, existing generative steganography methods still have limitations, such as low embedding capacity and poor image quality. To address these issues, a synthesis-based generative steganographic model is proposed in this paper. In the image synthesis task, guidance features are utilized to synthesize images with specific styles and attributes. Because the guidance features remain consistent before and after image synthesis, they can serve as a cover for steganography. The proposed model adopts the mean and standard deviation to quantify the distribution of guidance features, enabling secrets to be hidden within different trends of the feature distribution. By controlling the statistical dispersion of the embedded guidance features through the mean and standard deviation, the original feature distribution is preserved and the synthesized image maintains good generation quality. The space of guidance features contains style and attribute descriptions of various images, offering a large space for information hiding. Experimental results show that, compared with existing steganographic methods, the proposed model achieves better image quality and hiding capacity with strong robustness.
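The abstract's core mechanism, hiding bits in the statistical trend (mean and standard deviation) of guidance features while preserving their overall distribution, can be illustrated with a toy numeric sketch. The code below is not the paper's implementation; the channel layout, the ±delta mean shift, and the function names are all assumptions introduced only to make the mean/standard-deviation idea concrete.

```python
import numpy as np

def embed_bits(features, bits, delta=0.05):
    """Hypothetical sketch: hide one bit per guidance-feature channel.
    Each channel is standardized (zero mean, unit std), then its mean is
    nudged by +delta for bit 1 or -delta for bit 0. The standardization
    keeps the stego features' dispersion close to the original."""
    f = features.astype(float)
    mu = f.mean(axis=1, keepdims=True)
    sigma = f.std(axis=1, keepdims=True)
    shifts = np.where(np.asarray(bits)[:, None] == 1, delta, -delta)
    return (f - mu) / sigma + shifts  # unit-variance channels; mean encodes the bit

def extract_bits(stego_features):
    """A channel mean above zero decodes as 1, below zero as 0."""
    return (stego_features.mean(axis=1) > 0).astype(int).tolist()

rng = np.random.default_rng(0)
cover = rng.normal(size=(8, 256))      # 8 hypothetical guidance-feature channels
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_bits(cover, secret)
assert extract_bits(stego) == secret   # bits recovered from the feature means
```

Because the shift delta is small relative to the unit channel variance, the stego features stay statistically close to standardized cover features, which is the property the abstract credits with preserving generation quality.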
