Abstract

In the quantization-based watermarking framework, perceptual just noticeable distortion (JND) models have been widely used to determine the quantization step size, as they provide a better tradeoff between fidelity and robustness. However, the calculated JND values can vary due to the changes introduced by watermark embedding, and the resulting mismatch leads to watermark extraction errors even in the absence of attacks. We present an improved spread transform dither modulation (STDM) watermarking scheme. Performance improvement over existing algorithms is obtained by a discrete cosine transform (DCT)-based perceptual JND model that is highly compatible with the STDM watermarking algorithm. The proposed scheme not only incorporates various masking effects of human visual perception but also avoids the mismatch problem by utilizing a new measurement of pixel intensity and edge strength. In contrast to conventional JND models, the proposed model is theoretically invariant to the changes introduced by watermark embedding and is therefore better suited to quantization-based watermarking. Experimental results confirm the improved robustness of the JND model within the STDM watermarking framework. Simulation results show that the proposed scheme is more robust than existing JND-model-based watermarking algorithms at the same fidelity. Furthermore, the proposed scheme outperforms previously proposed perceptual STDM schemes.
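For readers unfamiliar with the framework the abstract builds on, the following is a minimal sketch of plain STDM embedding and extraction, not the paper's proposed scheme: a host vector (e.g., a block of DCT coefficients) is projected onto a spreading vector, and the projection is quantized with a dithered uniform quantizer. The JND-derived, perceptually adaptive step size described in the abstract is stood in for here by a fixed `delta`; the function names are hypothetical.

```python
import numpy as np

def stdm_embed(x, u, bit, delta):
    """Embed one bit into host vector x via spread transform dither modulation.

    x     : 1-D array of host coefficients (e.g., a block of DCT coefficients)
    u     : unit-norm spreading vector of the same length as x
    bit   : 0 or 1
    delta : quantization step (in the paper, derived from the perceptual JND model)
    """
    p = np.dot(x, u)                            # project host onto spreading direction
    d = (bit * delta / 2.0) - delta / 4.0       # dither: -delta/4 for bit 0, +delta/4 for bit 1
    q = delta * np.round((p - d) / delta) + d   # dithered uniform quantizer
    return x + (q - p) * u                      # adjust only the projected component

def stdm_extract(y, u, delta):
    """Recover the embedded bit by minimum-distance decoding of the projection."""
    p = np.dot(y, u)
    dists = []
    for bit in (0, 1):
        d = (bit * delta / 2.0) - delta / 4.0
        q = delta * np.round((p - d) / delta) + d
        dists.append(abs(p - q))                # distance to nearest codeword of each bit
    return int(np.argmin(dists))

# Illustrative usage with synthetic data standing in for DCT coefficients.
rng = np.random.default_rng(0)
x = rng.normal(size=16)
u = rng.normal(size=16)
u /= np.linalg.norm(u)
delta = 4.0                  # a per-block JND value would replace this constant
y = stdm_embed(x, u, 1, delta)
assert stdm_extract(y, u, delta) == 1
```

The mismatch problem the abstract addresses arises at this point: if `delta` is recomputed at the decoder from the watermarked image and differs from the value used at embedding, the quantization lattices no longer align and decoding can fail without any attack. The proposed model avoids this by computing the JND from measurements that are, in theory, unchanged by the embedding itself.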
