Abstract

In this paper, we propose a simple yet powerful text generation model, called the diversity regularized autoencoder (DRAE). The key novelty of the proposed model lies in its ability to take corrupted sentences as input and handle various modifications such as insertions, deletions, substitutions, and masking. Because this noise-injection strategy encourages the encoder to produce a smooth and continuous latent distribution, the proposed model can generate more diverse and coherent sentences. We also adopt a Wasserstein generative adversarial network with a gradient penalty to achieve stable adversarial training of the prior distribution. We evaluate the proposed model with quantitative, qualitative, and human evaluations on two public datasets. Experimental results demonstrate that our model with the noise-injection strategy produces more natural and diverse sentences than several baseline models. Furthermore, our model exhibits a synergistic effect between grammar correction and paraphrase generation in an unsupervised setting.
