Abstract

Natural language generation (NLG) tasks on pro-drop languages are known to suffer from the zero pronoun (ZP) problem, which remains challenging due to the scarcity of ZP-annotated NLG corpora. To address this, we propose a highly adaptive two-stage approach that couples context modeling with ZP recovery to mitigate the ZP problem in NLG tasks. Notably, we frame the recovery process in a task-supervised fashion: the ability to recover ZP representations is learned during NLG task training, so our method does not require NLG corpora annotated with ZPs. To further strengthen the system, we train an adversarial bot that adjusts our model's outputs to alleviate the error propagation caused by mis-recovered ZPs. Experiments on three document-level NLG tasks, i.e., machine translation, question answering, and summarization, show that our approach yields substantial improvements, with particularly strong gains on pronoun translation.
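To make the two-stage idea concrete, here is a minimal PyTorch sketch, not the authors' implementation: stage 1 recovers latent ZP representations from document context, stage 2 conditions a standard seq2seq generator on them, and the whole pipeline is trained with only the downstream NLG loss, so no ZP annotations are needed. All module names (`ZPRecoverer`, `ZPAwareNLG`), dimensions, and the single-query simplification are illustrative assumptions; the adversarial bot is omitted for brevity.

```python
import torch
import torch.nn as nn

class ZPRecoverer(nn.Module):
    """Stage 1 (sketch): recover representations for omitted pronouns from context."""
    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.context_encoder = nn.TransformerEncoder(layer, num_layers=2)
        # One learned query standing in for a potential ZP slot -- a simplification;
        # the real method would recover ZPs per position in the document.
        self.zp_query = nn.Parameter(torch.randn(1, 1, d_model))
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, doc_embeds):
        ctx = self.context_encoder(doc_embeds)             # (B, T, d)
        query = self.zp_query.expand(ctx.size(0), -1, -1)  # (B, 1, d)
        zp_repr, _ = self.attn(query, ctx, ctx)            # attend over context
        return zp_repr                                     # (B, 1, d)

class ZPAwareNLG(nn.Module):
    """Stage 2 (sketch): a seq2seq generator whose source is augmented with ZPs."""
    def __init__(self, vocab_size: int = 32000, d_model: int = 512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.recoverer = ZPRecoverer(d_model)
        # A real setup would also pass a causal target mask; omitted here.
        self.seq2seq = nn.Transformer(d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, tgt_ids):
        src = self.embed(src_ids)
        zp = self.recoverer(src)               # recovered ZP representations
        src_aug = torch.cat([zp, src], dim=1)  # prepend them to the source
        hidden = self.seq2seq(src_aug, self.embed(tgt_ids))
        return self.out(hidden)

# Task supervision: the only training signal is the downstream NLG loss,
# so the recoverer is shaped without any ZP-annotated corpus.
model = ZPAwareNLG()
src = torch.randint(0, 32000, (2, 18))
tgt = torch.randint(0, 32000, (2, 20))
logits = model(src, tgt)
loss = nn.functional.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                   tgt.reshape(-1))
loss.backward()
```

Because the gradient of the NLG loss flows back through `recoverer`, the ZP representations are shaped entirely by what helps generation, which is the sense in which recovery is "task-supervised".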

Highlights

  • (i) ZP-annotated corpora tailored for NLG tasks are scarce, and existing zero pronoun (ZP) corpora are limited to certain domains and tasks; (ii) using pre-trained ZP resolution systems to recover pronoun labels for NLG tasks risks propagating mis-recovered ZPs

  • For a long time, natural language generation (NLG) has attracted a lot of attention for its importance in serving human life

  • Lots of attention has been paid to ZP resolution in the past decade


Summary

Introduction

For a long time, natural language generation (NLG) has attracted a lot of attention for its importance in serving human life. However, two challenges remain: (i) ZP-annotated corpora tailored for NLG tasks are scarce, and existing ZP corpora are limited to certain domains and tasks; (ii) using pre-trained ZP resolution systems to recover pronoun labels for NLG tasks risks propagating mis-recovered ZPs into the generated text. Taking the Chinese TED corpus as an example, according to our statistics, sentences with an average length of 18 tokens each omit around 0.5 pronouns. Facing this problem, lots of attention has been paid to ZP resolution in the past decade. We perform document context modeling for both task-supervised ZP recovery and ZP-focused NLG.
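As a quick sanity check on the TED statistic, the implied token-level omission rate is easy to work out; a minimal sketch using only the two numbers quoted above:

```python
# Back-of-the-envelope rate implied by the paper's Chinese TED statistics.
avg_sentence_len = 18   # average tokens per sentence (from the paper)
zp_per_sentence = 0.5   # omitted pronouns per sentence (from the paper)

omission_rate = zp_per_sentence / avg_sentence_len
print(f"~{omission_rate:.1%} of tokens are dropped pronouns")
# -> ~2.8% of tokens, i.e., roughly one ZP every two sentences
```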
