Abstract

Natural language generation (NLG), whose primary aim is to automatically produce human-like text, has advanced significantly within artificial intelligence and natural language processing (NLP). Traditional text generation has focused mainly on binary style transfer, limiting its scope to simple transformations such as positive to negative tone or modern to ancient style. Real-world scenarios, however, demand far richer style diversity. Existing methods usually fail to capture this richness, which hinders their utility in practical applications. To address these limitations, we propose a multi-class conditioned text generation model. We overcome previous constraints with a transformer-based decoder equipped with adversarial networks and style-attention mechanisms that model the various styles present in multi-class text. Experimental results show that the proposed model outperforms alternative approaches on multi-class text generation tasks in terms of diversity while preserving fluency. We expect this study to help researchers not only train their own models but also build simulated multi-class text datasets for further research.
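The abstract names the core architectural ingredients (a transformer-based decoder, a style-attention mechanism, and an adversarial component) but does not specify the implementation. Below is a minimal PyTorch sketch of how such a style-conditioned decoder *could* be wired together; all class names (`StyleAttention`, `StyleConditionedDecoder`), layer sizes, and the mean-pooled discriminator head are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn


class StyleAttention(nn.Module):
    """Hypothetical style-attention layer: hidden states attend over a
    bank of learned style embeddings to pull in a style context vector."""

    def __init__(self, d_model: int, num_styles: int):
        super().__init__()
        self.style_bank = nn.Parameter(torch.randn(num_styles, d_model))
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, d_model); keys/values are the style bank
        styles = self.style_bank.unsqueeze(0).expand(hidden.size(0), -1, -1)
        ctx, _ = self.attn(hidden, styles, styles)
        return hidden + ctx  # residual fusion of style context


class StyleConditionedDecoder(nn.Module):
    """Sketch of a multi-class style-conditioned transformer decoder with
    an auxiliary style-classifier head for adversarial training."""

    def __init__(self, vocab_size: int, num_styles: int, d_model: int = 256):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.style_emb = nn.Embedding(num_styles, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, num_layers=4)  # causal via mask
        self.style_attn = StyleAttention(d_model, num_styles)
        self.lm_head = nn.Linear(d_model, vocab_size)
        self.discriminator = nn.Linear(d_model, num_styles)  # adversarial head

    def forward(self, tokens: torch.Tensor, style_ids: torch.Tensor):
        # Condition on the target style by adding its embedding to every token.
        x = self.tok_emb(tokens) + self.style_emb(style_ids).unsqueeze(1)
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.decoder(x, mask=mask)
        h = self.style_attn(h)
        logits = self.lm_head(h)                       # next-token prediction
        style_logits = self.discriminator(h.mean(1))   # style prediction on pooled states
        return logits, style_logits


# Toy usage: batch of 2 sequences of length 8 over a 1000-token vocab, 5 styles.
model = StyleConditionedDecoder(vocab_size=1000, num_styles=5)
tokens = torch.randint(0, 1000, (2, 8))
styles = torch.tensor([0, 3])
logits, style_logits = model(tokens, styles)
print(logits.shape, style_logits.shape)  # (2, 8, 1000), (2, 5)
```

In a full adversarial setup, the discriminator head would typically be trained to classify style while the generator receives a reversed or adversarial gradient from it (e.g., via a gradient-reversal layer or alternating optimization); that training loop is omitted here since the paper's abstract does not describe it.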
