Abstract

Existing steganalysis research has not specifically explored compressed speech streams with 1%–9% embedding rates. In this paper, we propose a general steganalysis method for low-embedding-rate steganography based on hierarchical feature extraction and fusion. First, the codewords in each frame are converted to a multi-hot vector, and each multi-hot vector is mapped to a fixed-length embedding vector using pre-trained dictionaries to obtain a more compact representation. Then, a hierarchical feature extraction and fusion framework extracts and fuses correlation features at different levels. Specifically, a 5-layer convolutional neural network extracts correlation features from local to global scales, and transposed convolutions restore features of different local scales to a common size. In addition, an attention mechanism is introduced at different layers of the network to assign importance weights to each layer's output features. Finally, a fully connected layer produces the prediction. Experimental results show that our method outperforms existing steganalysis methods at detecting multiple steganographies in low-bit-rate compressed speech streams. On a mixed dataset of multiple steganography methods, the proposed method reaches 73.56% accuracy on speech streams at a 5% embedding rate, and accuracy exceeds 83% at a 9% embedding rate.
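The codeword-to-embedding step described above can be illustrated with a minimal NumPy sketch. Everything here is a hedged assumption, not the paper's implementation: the codebook sizes, the embedding dimension, and the randomly initialized `dictionary` (standing in for the paper's pre-trained dictionaries) are all hypothetical, chosen only to show the shape of the transformation from per-frame codewords to a multi-hot vector and then to a compact fixed-length embedding.

```python
import numpy as np

# Illustrative parameters (NOT from the paper): a low-bit-rate codec frame
# carrying several quantization codewords, each drawn from its own codebook.
CODEBOOK_SIZES = [128, 32, 32]   # hypothetical per-frame codebook sizes
VOCAB = sum(CODEBOOK_SIZES)      # total number of distinct codeword slots
EMBED_DIM = 64                   # assumed fixed embedding length

rng = np.random.default_rng(0)
# Stand-in for a pre-trained dictionary: one embedding row per codeword.
dictionary = rng.standard_normal((VOCAB, EMBED_DIM))

def frame_to_multi_hot(codewords):
    """Convert one frame's codewords into a single multi-hot vector,
    with each codebook occupying its own index range."""
    vec = np.zeros(VOCAB)
    offset = 0
    for cw, size in zip(codewords, CODEBOOK_SIZES):
        vec[offset + cw] = 1.0
        offset += size
    return vec

def embed_frame(multi_hot):
    """Map a multi-hot vector to a compact fixed-length embedding by
    summing the dictionary rows of the active codewords."""
    return multi_hot @ dictionary  # (VOCAB,) @ (VOCAB, EMBED_DIM) -> (EMBED_DIM,)

# A toy 10-frame speech segment: one codeword index per codebook per frame.
frames = [[int(rng.integers(s)) for s in CODEBOOK_SIZES] for _ in range(10)]
embedded = np.stack([embed_frame(frame_to_multi_hot(f)) for f in frames])
print(embedded.shape)  # (10, 64): 10 frames, each a 64-dim embedding
```

The resulting `(frames, EMBED_DIM)` matrix is the kind of compact per-frame representation that a multi-layer CNN with attention, as described above, could then consume; that downstream network is not sketched here.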
