Abstract

In this paper, we propose an end-to-end system for unsupervised text style transfer (UTST). Prior studies on UTST rely on disentangling style and content features, a principle that succeeds at generating style-transferred text. The success of a style transfer system depends on three criteria, viz. style transfer accuracy, content preservation of the source, and fluency of the generated text. As previous work suggests, text generated by disentanglement-based methods achieves strong style transfer performance but suffers from poor content preservation. To develop an all-around solution covering all three criteria, we use a reinforcement-learning-based training objective that rewards the model for generating fluent style-transferred text while preserving the source content. On the modeling side, we develop a shared-encoder, style-specific-decoder architecture with a Transformer backbone. This modeling choice enables us to frame a differentiable back-translation objective that further aids content preservation, as a careful ablation study shows. We conclude the paper with both automatic and human evaluations, which show the superiority of our proposed method on sentiment and formality style transfer tasks. Code is available at https://github.com/newcodevelop/Unsupervised-TST .
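To make the architectural idea concrete, below is a minimal PyTorch sketch of a shared Transformer encoder paired with style-specific decoders, together with a toy reward combining the three criteria named above. This is not the authors' released code (see the linked repository for that): all module names, layer sizes, and reward weights are illustrative assumptions.

```python
# Minimal sketch, assuming a token-level seq2seq setup: one encoder shared
# across styles, one Transformer decoder per target style. Sizes are toy values.
import torch
import torch.nn as nn


class SharedEncoderStyleDecoders(nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 256, num_styles: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Shared encoder: produces a style-agnostic encoding of the content.
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        # One independent decoder per target style.
        self.decoders = nn.ModuleList(
            [
                nn.TransformerDecoder(
                    nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True),
                    num_layers=2,
                )
                for _ in range(num_styles)
            ]
        )
        self.out_proj = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, tgt_ids, style: int):
        memory = self.encoder(self.embed(src_ids))  # shared content encoding
        causal = nn.Transformer.generate_square_subsequent_mask(tgt_ids.size(1))
        hidden = self.decoders[style](self.embed(tgt_ids), memory, tgt_mask=causal)
        return self.out_proj(hidden)  # logits over the vocabulary


def reward(style_prob, content_sim, fluency, weights=(1.0, 1.0, 1.0)):
    # Toy scalar reward for RL training: the component scores would come from
    # a style classifier, a content-similarity metric against the source, and
    # a language model, respectively. The weights are illustrative assumptions.
    return weights[0] * style_prob + weights[1] * content_sim + weights[2] * fluency


# Usage: decode a batch with the target style's decoder (teacher forcing).
model = SharedEncoderStyleDecoders(vocab_size=1000)
src = torch.randint(0, 1000, (2, 8))  # 2 source sentences, 8 tokens each
tgt = torch.randint(0, 1000, (2, 8))  # shifted target tokens
logits = model(src, tgt, style=1)
```

Because the encoder is shared and each style owns its own decoder, a round trip (source style A → style B → back to A) can be composed from the same modules, which is what makes the differentiable back-translation objective mentioned above natural to frame under this design.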
