Abstract

Text generation is a challenging task for intelligent agents. Numerous research efforts have investigated adversarial networks with word sequence-based generators. However, these approaches suffer from an imbalance between generator and discriminator that leads to overfitting: the discriminator grows too strong by becoming overly precise at distinguishing the generator's output from samples of the real dataset. In this paper, we investigate how to balance the generator and discriminator of a sequence-based text adversarial network by exploiting: i) global knowledge in the input of the adversarial network, encoded by global word embeddings that are adapted to the context of the datasets in which they are used, and ii) a self-attentive discriminator that minimizes its loss function slowly, thereby providing the generator with valuable feedback throughout the training process. In an extensive evaluation on three datasets of short-, medium-, and long-length text documents, results computed with word-overlap metrics show that our model outperforms four baselines. We also discuss the results of our model in terms of readability metrics and the human-perceived quality of the generated documents.
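
To make the two ingredients concrete, the sketch below shows one plausible shape for such a discriminator in PyTorch: token embeddings initialized from pretrained global vectors (e.g. GloVe) and left trainable so they adapt to the dataset, followed by a self-attention layer and a real/fake classifier. This is a minimal illustration under stated assumptions, not the authors' implementation; the class name, dimensions, and the random stand-in vectors are all hypothetical.

```python
# Minimal sketch of a self-attentive GAN discriminator for text.
# Assumes PyTorch; all names and sizes are illustrative.
import torch
import torch.nn as nn

class SelfAttentiveDiscriminator(nn.Module):
    def __init__(self, pretrained_vectors: torch.Tensor, num_heads: int = 4):
        super().__init__()
        vocab_size, embed_dim = pretrained_vectors.shape
        # Initialize from global word embeddings; freeze=False lets the
        # vectors adapt to the context of the training dataset.
        self.embed = nn.Embedding.from_pretrained(pretrained_vectors, freeze=False)
        # Self-attention over the token sequence.
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(embed_dim, 1)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(token_ids)                 # (batch, seq_len, embed_dim)
        attended, _ = self.attn(x, x, x)          # attend each token to all others
        pooled = attended.mean(dim=1)             # average-pool the sequence
        return torch.sigmoid(self.classifier(pooled))  # P(input is real)

# Usage: score a batch of 8 token-id sequences of length 20.
vectors = torch.randn(10_000, 128)                # stand-in for GloVe vectors
disc = SelfAttentiveDiscriminator(vectors)
scores = disc(torch.randint(0, 10_000, (8, 20)))  # shape (8, 1)
```

One way to realize the "slowly minimizing" behavior described in the abstract, again as an assumption rather than the paper's stated recipe, is to train such a discriminator with a smaller learning rate or fewer update steps per generator step, so its loss decreases gradually and the generator keeps receiving informative gradients.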
