Abstract

Visual saliency (VS) is an important mechanism for identifying which areas of an image attract more attention from the human visual system (HVS). VS can therefore be used to weight the just noticeable difference (JND) according to different attention levels. Several VS-based JND profiles have been proposed in the DCT domain, but they rely only on bottom-up features such as luminance and texture. Recent research on saliency detection has shown that models combining bottom-up and top-down features significantly improve overall detection performance. In this paper, we propose a novel two-layer VS-induced JND profile composed of bottom-up features and a top-down feature, both extracted from DCT blocks in the transform domain. In this model, luminance and texture features are used to compute the bottom-up feature maps, while a top-down focus feature guides the generation of the final salient regions, since photographers typically keep the regions intended to attract attention in focus. The proposed two-layer saliency-induced JND model is further applied to modulate the quantization step in a watermarking framework, exploiting its merits to achieve a better tradeoff between fidelity and robustness. Experimental results show that the proposed scheme outperforms previous watermarking schemes.
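
For illustration only, the sketch below shows one way a saliency-modulated JND could scale the quantization step of a QIM-style embedder on a DCT coefficient. The function names, the `alpha` weighting, and the linear modulation rule are assumptions made for demonstration and are not the paper's exact formulation.

```python
import numpy as np

def modulated_step(base_step, jnd, saliency, alpha=0.5):
    """Scale the base quantization step by the block JND, attenuated in
    salient blocks (saliency in [0, 1]); alpha (assumed) controls how
    strongly saliency shrinks the step to preserve fidelity there."""
    return base_step * jnd * (1.0 - alpha * saliency)

def qim_embed(coeff, bit, step):
    """Quantization index modulation: snap the coefficient onto the
    lattice associated with the watermark bit (dither step/2 for bit 1)."""
    dither = 0.0 if bit == 0 else step / 2.0
    return np.round((coeff - dither) / step) * step + dither

def qim_extract(coeff, step):
    """Decode the bit by picking the nearer of the two lattices."""
    d0 = abs(coeff - np.round(coeff / step) * step)
    d1 = abs(coeff - (np.round((coeff - step / 2) / step) * step + step / 2))
    return 0 if d0 <= d1 else 1

# Example: one mid-frequency DCT coefficient of an 8x8 block, with
# illustrative per-block JND and saliency values.
coeff, bit = 23.7, 1
step = modulated_step(base_step=4.0, jnd=1.8, saliency=0.6)
marked = qim_embed(coeff, bit, step)
assert qim_extract(marked, step) == bit
```

With this modulation, highly salient blocks receive a smaller step (less visible distortion), while non-salient blocks tolerate a larger step (greater robustness), which is the fidelity/robustness tradeoff the abstract describes.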
