Abstract

Steganography, particularly in the form of text generation driven by a secret message, has become an active research topic. A hidden message is more difficult to detect when it is embedded directly into generated text rather than into a cover text, and this approach also offers a higher embedding capacity. Owing to the high imperceptibility and resistance to steganalysis of this type of steganography, it is essential that steganalysis methods achieve better performance. Although increasing the complexity of deep learning models raises the accuracy rate, it also increases inference time. In this study, linguistic steganalysis was performed with a lower inference time and a higher accuracy rate. In the developed model, the differences between non-stego and steganographic texts were first modelled by a BERT model fine-tuned on a custom dataset. The disparity information obtained by the fine-tuned model was then distilled into three separate networks, BertGCN, BertGAT, and BertGIN, for faster and more accurate inference. Finally, these three distilled networks were combined through transfer learning to form a new model. Experiments demonstrate that the proposed model surpasses other methods in terms of accuracy (0.9879 at 3.22 bpw on text encoded with SAAC encoding) and inference efficiency (1.09 seconds).
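The abstract describes distilling the fine-tuned BERT teacher into graph-based students (BertGCN, BertGAT, BertGIN). As a minimal sketch of such a knowledge-distillation objective, assuming a standard PyTorch setup, the loss below blends the teacher's soft predictions with the hard stego/non-stego labels; the temperature T and weight alpha are illustrative hyperparameters, not values reported in the paper, and teacher_logits / student_logits are hypothetical names for the two models' classifier outputs.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits: torch.Tensor,
                          teacher_logits: torch.Tensor,
                          labels: torch.Tensor,
                          T: float = 2.0,
                          alpha: float = 0.5) -> torch.Tensor:
        """Blend the soft-target KL term (teacher -> student) with ordinary
        cross-entropy against the stego / non-stego labels."""
        soft_student = F.log_softmax(student_logits / T, dim=-1)
        soft_teacher = F.softmax(teacher_logits / T, dim=-1)
        # T^2 keeps the gradient magnitude of the soft term comparable
        # to the hard-label term when the temperature is raised.
        kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * (T * T)
        ce = F.cross_entropy(student_logits, labels)
        return alpha * kd + (1.0 - alpha) * ce

    # Toy usage: a batch of 4 examples, binary stego classification.
    student_logits = torch.randn(4, 2, requires_grad=True)
    teacher_logits = torch.randn(4, 2)
    labels = torch.randint(0, 2, (4,))
    loss = distillation_loss(student_logits, teacher_logits, labels)
    loss.backward()

In practice the same loss would be applied separately to each of the three graph students before they are combined, but the paper's exact training schedule is not specified in the abstract.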
