Abstract

With rapid urbanization, massive amounts of social-network data are collected and aggregated in real time, making it possible for criminals to use images as a cover to spread secret information on the Internet. Determining whether these images contain secret information is a major challenge for multimedia computing security. Steganalysis based on deep learning can effectively determine whether images transmitted on the Internet in urban scenes contain secret information, which is of great significance for safeguarding national and social security. Deep-learning-based image steganalysis has powerful learning and classification ability, and its detection accuracy on stego images has surpassed that of traditional steganalysis based on hand-crafted features; in recent years it has become a hot topic in information hiding research. However, the detection accuracy of existing deep-learning-based steganalysis methods still needs improvement; in particular, when detecting arbitrary-size and multi-source images, their detection performance is easily degraded by cover-source mismatch. In this manuscript, we propose a steganalysis method based on a Siamese network with an inverted residuals structure (abbreviated as SiaIRNet, the Siamese-Inverted-Residuals-Network-based method). The SiaIRNet method uses a Siamese convolutional neural network (CNN) to obtain the residual features of sub-images, and comprises three stages: preprocessing, feature extraction, and classification. Firstly, a preprocessing layer combining high-pass filters with depthwise separable convolution is designed to more accurately capture the correlation of residuals between feature channels, which helps capture rich and effective residual features. Then, a feature extraction layer based on the inverted residuals structure is proposed, which improves the model's ability to obtain residual features by expanding channels and reusing features. Finally, a fully connected layer is used to classify cover-image and stego-image features. Using three general datasets, BossBase-1.01, BOWS2, and ALASKA#2, as cover images, a large number of experiments are conducted in comparison with state-of-the-art steganalysis methods. The experimental results show that, compared with the classical SID method and the latest SiaStegNet method, the detection accuracy of the proposed method for 15 arbitrary-size images is improved by 15.96% and 5.86% on average, respectively, which verifies the higher detection accuracy and better adaptability of the proposed method to multi-source and arbitrary-size images in urban scenes.
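
To make the described pipeline concrete, the following is a minimal PyTorch sketch of the three stages named in the abstract: a preprocessing layer with a fixed high-pass filter followed by a depthwise separable convolution, a feature extractor built from inverted residual blocks, and a fully connected classifier over two weight-sharing Siamese branches. The specific KV kernel, channel widths, block count, and the left/right sub-image split are illustrative assumptions, not the authors' exact SiaIRNet configuration.

# Minimal sketch of the pipeline described in the abstract (assumptions noted inline).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Preprocess(nn.Module):
    """High-pass filtering followed by a depthwise separable convolution."""
    def __init__(self, out_ch=30):
        super().__init__()
        # Fixed high-pass filter (SRM-style KV kernel, assumed here) to expose stego residuals.
        kv = torch.tensor([[-1,  2,  -2,  2, -1],
                           [ 2, -6,   8, -6,  2],
                           [-2,  8, -12,  8, -2],
                           [ 2, -6,   8, -6,  2],
                           [-1,  2,  -2,  2, -1]], dtype=torch.float32) / 12.0
        self.register_buffer("hpf", kv.view(1, 1, 5, 5))
        # Depthwise separable convolution to model residual correlation across channels.
        self.depthwise = nn.Conv2d(1, 1, 3, padding=1, groups=1)
        self.pointwise = nn.Conv2d(1, out_ch, 1)

    def forward(self, x):
        r = F.conv2d(x, self.hpf, padding=2)          # residual map
        return F.relu(self.pointwise(self.depthwise(r)))

class InvertedResidual(nn.Module):
    """Inverted residual block: expand -> depthwise conv -> project, with a skip connection."""
    def __init__(self, ch, expand=4):
        super().__init__()
        hidden = ch * expand
        self.block = nn.Sequential(
            nn.Conv2d(ch, hidden, 1), nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, ch, 1), nn.BatchNorm2d(ch),
        )

    def forward(self, x):
        return x + self.block(x)                      # feature reuse via the residual path

class SiaIRNetSketch(nn.Module):
    """Siamese extractor shared by two sub-images, then a fully connected classifier."""
    def __init__(self, ch=30, num_blocks=3):
        super().__init__()
        self.pre = Preprocess(ch)
        self.features = nn.Sequential(*[InvertedResidual(ch) for _ in range(num_blocks)])
        self.fc = nn.Linear(2 * ch, 2)                # cover vs. stego

    def embed(self, x):
        f = self.features(self.pre(x))
        return F.adaptive_avg_pool2d(f, 1).flatten(1) # global pooling handles arbitrary sizes

    def forward(self, left, right):
        # Both branches share all weights (Siamese structure).
        return self.fc(torch.cat([self.embed(left), self.embed(right)], dim=1))

# Example: split a grayscale image into left/right sub-images and score it.
img = torch.randn(1, 1, 256, 384)
model = SiaIRNetSketch()
logits = model(img[..., :192], img[..., 192:])

The global average pooling before the classifier is one simple way to accommodate arbitrary-size inputs; how SiaIRNet actually merges the two sub-image feature vectors is not specified in the abstract and is assumed here to be plain concatenation.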
