Abstract

Compared with traditional image encryption algorithms that produce noise-like cipher images, visually meaningful image encryption protects digital images in both content and appearance. However, in existing schemes the embedding position is usually fixed or controlled only by keys, and an ill-chosen position degrades both the visual quality of the cipher image and the quality of the decrypted image. This paper proposes a novel visually meaningful image encryption scheme with adaptive embedding, built on a chaotic cellular neural network (CCNN), parallel compressive sensing (PCS), and least significant bit (LSB) embedding in the transform domain. First, the 2D discrete wavelet transform (DWT) is applied to sparsify the plain image. Then, the thresholded sparse coefficient matrix is encrypted with a local binary pattern (LBP) operation and measured by PCS. Finally, information entropy is used to assess the texture of the carrier image so that the measurements can be embedded adaptively, yielding a visually meaningful cipher image. In addition, a plaintext feature value is used as part of the encryption key stream, which strengthens resistance to known-plaintext and chosen-plaintext attacks. For practicality, this feature value is hidden inside the visually meaningful cipher image, which avoids unnecessary key transmission. Experimental results show that the scheme is effective in terms of both visual security and decryption quality.
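
As a rough illustration of the entropy-guided adaptive embedding step, the following Python sketch ranks carrier-image blocks by Shannon entropy and writes secret bits into the LSBs of the most textured blocks first. This is a spatial-domain simplification under assumed parameters (8x8 blocks, a grayscale carrier, and the hypothetical names block_entropy and adaptive_lsb_embed); the paper's scheme embeds the PCS measurements in the transform domain and also hides the plaintext feature value, which is not shown here.

```python
import numpy as np

def block_entropy(block):
    """Shannon entropy of an 8-bit image block, used as a texture measure."""
    hist = np.bincount(block.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def adaptive_lsb_embed(carrier, secret_bits, block_size=8):
    """Embed secret bits into the LSBs of the most textured carrier blocks.

    Blocks are ranked by information entropy; embedding starts with the
    highest-entropy (most textured) blocks, where LSB changes are least
    visible. Returns the stego image and the block order, which the
    receiver would need for extraction.
    """
    h, w = carrier.shape
    stego = carrier.copy()
    # Rank blocks by entropy, most textured first.
    blocks = [(block_entropy(carrier[r:r + block_size, c:c + block_size]), r, c)
              for r in range(0, h - block_size + 1, block_size)
              for c in range(0, w - block_size + 1, block_size)]
    blocks.sort(reverse=True)

    bits = iter(secret_bits)
    order = []
    for _, r, c in blocks:
        order.append((r, c))
        for i in range(block_size):
            for j in range(block_size):
                b = next(bits, None)
                if b is None:
                    return stego, order
                # Replace the least significant bit of the pixel.
                stego[r + i, c + j] = (stego[r + i, c + j] & 0xFE) | b
    return stego, order
```

In the scheme described by the abstract, the bit stream would come from the quantized PCS measurements of the thresholded DWT coefficients rather than from an arbitrary secret; the sketch only shows how entropy ranking could steer the embedding toward textured regions.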
