Abstract

Image-to-image steganography refers to the practice of hiding a secret image within a cover image, and it serves as a crucial technique for secure communication and data protection. Existing generative adversarial network (GAN)-based methods for image hiding achieve high embedding capacity, but the quality of both the stego images and the extracted secret images leaves considerable room for improvement. In this study, we propose an architecture that inconspicuously hides an image within the Y channel of another image, leveraging a U-Net network and a multi-scale fusion ExtractionBlock. The network is trained jointly with a loss function that combines Perceptual Path Length (PPL) and Mean Squared Error (MSE). The proposed network is trained and tested on two datasets, Labeled Faces in the Wild and Pascal Visual Object Classes. Experimental results demonstrate that the model achieves high invisibility and a large hiding capacity (8 bits per pixel) without altering the color information of the cover image, while also exhibiting strong generalization ability. In addition, we introduce the Modified Multi-Image Similarity Metric (MMISM), which integrates the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) of images, to comprehensively evaluate the network's hiding and extraction capabilities.
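The claim that embedding in the Y channel leaves the cover's color information untouched follows from the luma/chroma separation of the YCbCr color space. The sketch below illustrates this with the standard ITU-R BT.601 (JPEG-style) conversion in NumPy; the paper's exact color pipeline is not specified in the abstract, so the coefficients here are an assumption based on that common standard.

```python
import numpy as np


def rgb_to_ycbcr(rgb):
    """Convert an RGB image (H, W, 3), float in [0, 1], to YCbCr.

    Uses ITU-R BT.601 full-range (JPEG) coefficients; chroma is
    offset to be centered at 0.5.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b + 0.5
    return np.stack([y, cb, cr], axis=-1)


def ycbcr_to_rgb(ycbcr):
    """Inverse BT.601 full-range conversion back to RGB."""
    y = ycbcr[..., 0]
    cb = ycbcr[..., 1] - 0.5
    cr = ycbcr[..., 2] - 0.5
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.stack([r, g, b], axis=-1)


def recombine_with_stego_y(cover_rgb, stego_y):
    """Replace only the luma plane of the cover with the network's
    stego Y output, keeping the cover's Cb/Cr planes bit-exact."""
    ycbcr = rgb_to_ycbcr(cover_rgb)
    ycbcr[..., 0] = stego_y
    return ycbcr_to_rgb(ycbcr)
```

Because `recombine_with_stego_y` writes only the Y plane, the cover's chrominance (Cb/Cr) channels pass through unchanged, which is the sense in which this class of method preserves the cover image's color information.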
