Abstract

Natural language generation (NLG) models, combined with increasingly mature and powerful deep learning techniques, have been widely used in recent years. NLG models deployed in practical applications may be stolen or used illegally, and watermarking has become an important tool to protect the intellectual property (IP) of these deep models. A watermarking technique designs algorithms to embed watermark information into a model and to extract that information for IP identification; for NLG models, this can be seen as a symmetric signal processing problem. However, in terms of IP protection of NLG models, existing watermarking approaches cannot provide reliable and timely protection, nor can they prevent illegal users from exploiting the original performance of stolen models. In addition, the quality of the watermarked text sequences generated by some watermarking approaches is low. In view of this, this paper proposes two schemes that embed watermarks into the hidden memory state of an RNN to protect the IP of NLG models for different tasks. Besides, we add a language model loss to the model decoder to improve the grammatical correctness of the output text sequences. Experiments show that our approach neither compromises the performance of the original NLG models on the corresponding datasets nor degrades the quality of the output text sequences, while forged secret keys yield unusable NLG models, thus defeating the purpose of model infringement. We also conduct extensive experiments demonstrating that the proposed model is robust under different attacks.
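The abstract does not specify the two embedding schemes, so the following is only a minimal, hypothetical sketch of the general idea: a secret key derives a watermark pattern that is blended into an RNN's hidden memory state during computation, and ownership is later verified by correlating the hidden state with the key-derived pattern. All function names, the blending coefficient `alpha`, and the correlation-based detector are illustrative assumptions, not the paper's actual method.

```python
import hashlib
import numpy as np

HIDDEN = 64

def key_to_watermark(secret_key: str, dim: int) -> np.ndarray:
    # Derive a deterministic pseudo-random +/-1 pattern from the secret key
    # (hashlib keeps the seed stable across Python runs, unlike hash()).
    seed = int.from_bytes(hashlib.sha256(secret_key.encode()).digest()[:4], "big")
    k_rng = np.random.default_rng(seed)
    return k_rng.choice([-1.0, 1.0], size=dim)

def rnn_step(h, x, W_h, W_x, wm, alpha=0.3):
    # Plain tanh RNN update, with the key-derived watermark pattern
    # blended into the hidden memory state; alpha sets embedding strength.
    h_new = np.tanh(W_h @ h + W_x @ x)
    return (1.0 - alpha) * h_new + alpha * wm

def detect(h, secret_key) -> float:
    # Ownership check: cosine similarity between the hidden state and the
    # key-derived pattern should be high only for the true secret key.
    wm = key_to_watermark(secret_key, h.size)
    return float(h @ wm) / (np.linalg.norm(h) * np.linalg.norm(wm))

rng = np.random.default_rng(0)
W_h = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))
W_x = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))
true_wm = key_to_watermark("owner-key", HIDDEN)

h = np.zeros(HIDDEN)
for _ in range(20):  # run a few steps on random inputs
    h = rnn_step(h, rng.normal(size=HIDDEN), W_h, W_x, true_wm)

print("true key score:  ", detect(h, "owner-key"))
print("forged key score:", detect(h, "forged-key"))
```

A forged key yields an essentially uncorrelated pattern, so its detection score stays near zero while the true key's score is clearly positive; this mirrors the abstract's claim that forged secret keys fail to identify (or usefully operate) the watermarked model.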
