Text authentication, which preserves digital identities and content integrity, can prevent a variety of cybercrimes. Digital signatures are a widely used means of authenticating text. One approach is linguistic steganography, which hides the signature within the text itself and thereby facilitates efficient data management. However, such modifications risk causing automated computing systems to make inappropriate decisions, and may alter their final outputs unnoticed. Consequently, there is growing interest in reversible steganography, which allows any distortion introduced during embedding to be removed. This paper applies contextual masking, rather than random masking, with the BERT model. The goal of this research is to develop a reversible steganographic system specific to natural-language text. Our model uses pre-trained BERT as a transformer-based masked language model and reversibly embeds messages through predictive word substitution. To quantify predictive uncertainty, we introduce an adaptive steganographic technique based on Bayesian deep learning. Experiments show that the proposed system balances imperceptibility and capacity while closely preserving semantics. We also integrate ensemble methods in place of Monte Carlo sampling to further balance imperceptibility.
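As a conceptual illustration of prediction-based word substitution with a masked language model, consider the following minimal sketch. It is not the authors' implementation: the model checkpoint, the `embed_bit` function, and the mapping of message bits to ranked candidates are assumptions made here for illustration only.

```python
# Hypothetical sketch: embed one message bit by masking a word and choosing
# between the masked language model's top-2 predictions. The receiver can
# re-run the same deterministic model, re-rank the candidates, and read the
# bit back from which candidate appears in the stego text.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def embed_bit(text: str, mask_index: int, bit: int) -> str:
    """Replace the token at mask_index with the bit-th most likely prediction."""
    tokens = tokenizer.tokenize(text)
    tokens[mask_index] = tokenizer.mask_token
    inputs = tokenizer(tokenizer.convert_tokens_to_string(tokens),
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Locate the [MASK] position inside the encoded sequence.
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    # Top-2 candidates form a 1-bit channel: candidate 0 encodes bit 0,
    # candidate 1 encodes bit 1.
    top2 = logits[0, mask_pos].topk(2).indices
    tokens[mask_index] = tokenizer.convert_ids_to_tokens(top2[bit].item())
    return tokenizer.convert_tokens_to_string(tokens)

print(embed_bit("the cat sat on the mat", mask_index=1, bit=1))
```

In reversible schemes of this kind, embedding is typically restricted to positions where the original token can be recovered exactly (for example, where the model's top prediction matches the cover word), and uncertainty estimates such as those from Bayesian deep learning or ensembles are used to select positions where substitution is both safe and imperceptible.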