Abstract

Deep neural networks have revolutionised the research landscape of steganography. However, their potential has not been explored in invertible steganography, a special class of methods that permits cover objects distorted by steganographic perturbations to be restored to their pristine condition. In this paper, we revisit the regular-singular (RS) method and show that this elegant but obsolete invertible steganographic method can be reinvigorated and brought into the modern era via neuralisation. Towards developing a renewed RS method, we introduce adversarial learning to capture the regularity of natural images automatically, in contrast to handcrafted discrimination functions based on heuristic image priors. Specifically, we train generative adversarial networks (GANs) to predict the bit-planes that have been used to carry hidden information. We then form a synthetic image and use it as a reference to guide data embedding and image recovery. Large-scale statistical evaluations showed a significant improvement over the prior implementation of the RS method.
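The abstract describes the pipeline only at a high level: a generator is trained adversarially to predict a bit-plane of a cover image from the remaining bit-planes, and the synthesised plane then serves as the reference consulted during embedding and recovery. The PyTorch sketch below is purely illustrative of that idea and is not the authors' implementation; the module names (PlanePredictor, PlaneCritic), the choice of the least-significant bit-plane, the network sizes and the loss weighting are all assumptions.

```python
# Illustrative sketch (not the paper's code): adversarially train a generator to
# predict the LSB plane of a greyscale image from its 7 upper bit-planes, then use
# the prediction as a reference plane for RS-style embedding and recovery.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PlanePredictor(nn.Module):
    """Generator: predicts a probability map for the LSB plane from the 7 upper bit-planes."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(7, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, upper_planes):
        return self.net(upper_planes)


class PlaneCritic(nn.Module):
    """Discriminator: judges whether an LSB plane looks natural given the upper bit-planes."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(8, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, upper_planes, lsb_plane):
        return self.net(torch.cat([upper_planes, lsb_plane], dim=1))


def split_planes(img):
    """Split an 8-bit greyscale batch (N, 1, H, W) into upper bit-planes and the LSB plane."""
    upper = torch.cat([((img >> b) & 1).float() for b in range(7, 0, -1)], dim=1)  # (N, 7, H, W)
    lsb = (img & 1).float()                                                         # (N, 1, H, W)
    return upper, lsb


# One adversarial training step on a toy batch (random values stand in for real images).
imgs = torch.randint(0, 256, (4, 1, 64, 64), dtype=torch.int64)
upper, lsb = split_planes(imgs)

G, D = PlanePredictor(), PlaneCritic()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

fake = G(upper)

# Discriminator step: real LSB planes versus synthesised ones.
d_loss = bce(D(upper, lsb), torch.ones(4, 1)) + bce(D(upper, fake.detach()), torch.zeros(4, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the critic while staying close to the true plane (L1 reconstruction term).
g_loss = bce(D(upper, fake), torch.ones(4, 1)) + F.l1_loss(fake, lsb)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# At embedding/recovery time, the thresholded prediction acts as the reference bit-plane
# that an RS-style procedure would consult when flipping or restoring pixel groups.
reference_plane = (G(upper) > 0.5).float()
```

The split into a predicted reference plane plus an RS-style flipping rule is one plausible reading of the abstract's "synthetic image used as a reference"; the actual network architecture and loss terms in the paper may differ.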

Highlights

  • Steganography is the art and science of hiding information within a seemingly innocuous carrier or cover.

  • Looking back upon the history of invertible steganography, we found that the RS method offers an elegant framework into which deep learning technology can infuse new life; in other words, it can be neuralised.

  • Experimental results validated the effectiveness of the proposed method and showed a significant performance boost.


Summary

Introduction

Steganography is the art and science of hiding information within a seemingly innocuous carrier or cover. Most steganographic methods inevitably distort the cover object with a small amount of noise as the price to pay for carrying hidden data. In today's big data era, steganography, or as it is more frequently addressed in this context, watermarking, can be used to help archive data by embedding a digital object identifier, digital signature or metadata, and to facilitate verification of authenticity when distributing the samples. Recent studies have shown that deep learning models can be susceptible to deliberately crafted small noise called adversarial perturbations, which can cause the output to change drastically [15]–[19]. While no claim has been made that steganographic noise would to any extent poison or contaminate a dataset the way specially engineered perturbations do, it is desirable to be able to undo the changes and recover an untainted, clean copy of the samples for good measure; as the proverb goes, 'a stitch in time may save nine'.

