Abstract

Paraphrase generation has long been a challenging task in NLP. Despite the considerable progress made by previous work, existing methods lack a flexible way to incorporate multiple controllable attributes that enhance the diversity of paraphrased sentences. To overcome this limitation, we propose the Successively Conditional Transformer (SECT). SECT combines a Conditional Variational AutoEncoder (CVAE) with the Transformer framework to generate diverse wording. More specifically, SECT employs multi-head attention and a memory gate mechanism to model the interaction between each attribute and the corresponding encoder-layer hidden state. To absorb a flexible number of attributes, SECT adopts a successive structure that progressively couples the CVAE latent variables with the encoder-layer hidden states. In addition, SECT is trained by minimizing a tailored loss so that it produces paraphrased sentences as required. Finally, we conduct extensive experiments to substantiate the effectiveness of the proposed model. The results show that SECT significantly outperforms existing state-of-the-art approaches and generates more diverse paraphrases.
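
As a rough illustration of the gated coupling described above, the following minimal PyTorch sketch shows one way a CVAE latent variable could be fused with an encoder layer's hidden states through a memory gate. All names here (GatedLatentCoupling, proj_z, gate) are hypothetical, and the paper's actual formulation may differ; this is a sketch under those assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GatedLatentCoupling(nn.Module):
    """Illustrative memory-gate fusion of a latent variable z with
    one encoder layer's hidden states (hypothetical module names)."""

    def __init__(self, d_model: int, d_latent: int):
        super().__init__()
        self.proj_z = nn.Linear(d_latent, d_model)   # lift z to model width
        self.gate = nn.Linear(2 * d_model, d_model)  # memory gate over [h; z]

    def forward(self, h: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq_len, d_model) encoder-layer hidden states
        # z: (batch, d_latent) attribute-conditioned CVAE latent variable
        z_proj = self.proj_z(z).unsqueeze(1).expand_as(h)
        g = torch.sigmoid(self.gate(torch.cat([h, z_proj], dim=-1)))
        # Gated mixture: the gate decides, per position and dimension,
        # how much of the original state vs. the latent signal to keep.
        return g * h + (1.0 - g) * z_proj
```

Applied successively, one such coupling per attribute and encoder layer would let each latent variable refine the representation produced by the previous step, which is the flavor of progressive conditioning the abstract describes.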
