Abstract

Transformer-based large language models have shown strong text generation ability. However, because of the significant computing resources they require, little work has used language models such as GPT-2 to generate emotional text. To address this issue, the authors propose an affective prompt-tuning-based language model (APT-LM) equipped with an affective decoding (AD) method, aiming to enhance emotional text generation under limited computing resources. Specifically, the proposed model incorporates emotional attributes into the soft prompt using the NRC emotion intensity lexicon and updates only the additional parameters while freezing the language model. It then steers generation toward a given emotion by computing the cosine distance between the affective soft prompt and the candidate tokens produced by the language model. Experimental results show that the proposed APT-LM model significantly improves emotional text generation and achieves competitive sentence fluency compared to baseline models in both automatic and human evaluation.
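The decoding step described above can be sketched as a simple re-ranking of candidate tokens. This is a minimal illustration, not the authors' implementation: the function names, the 2-d toy embeddings, and the choice to add the similarity as a weighted bonus to the logits are all assumptions made for clarity.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def affective_rerank(logits, token_embeddings, affect_embedding, weight=1.0):
    # Hypothetical sketch of affective decoding: bias each candidate
    # token's logit by its cosine similarity to the affective soft-prompt
    # embedding, so tokens aligned with the target emotion score higher.
    sims = np.array([cosine_sim(e, affect_embedding) for e in token_embeddings])
    return logits + weight * sims

# Toy example: three candidate tokens with 2-d embeddings.
logits = np.array([1.0, 1.0, 1.0])
embeds = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
affect = np.array([1.0, 0.0])  # hypothetical target-emotion direction
scores = affective_rerank(logits, embeds, affect)
best = int(np.argmax(scores))  # token 0, which aligns with the affect vector
```

In this sketch the `weight` parameter would trade off emotional steering against fluency, echoing the paper's finding that APT-LM improves emotion expression while staying competitive on fluency.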
