Abstract

Post-training is known to be effective for boosting the performance of a pre-trained language model. In question generation, however, post-trained question generators perform poorly when training examples are insufficient, even under a well-designed training objective. To address this problem, this paper proposes a novel post-training method for question generation that combines post-training objectives with a data augmentation technique that increases the number of training examples. As post-training objectives, the paper introduces a new objective, wh-word deletion, in addition to the well-known question infilling. It also employs back-translation to enlarge the set of post-training instances. To demonstrate the effectiveness of the proposed method, the post-training strategies are applied to T5, a large-scale pre-trained language model, on SQuAD-QG. The experimental results show that the proposed post-training enhances the performance of answer-aware question generation.
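As an illustration of how a wh-word deletion objective could be realized, the sketch below masks every wh-word in a question with T5-style sentinel tokens and builds the corresponding reconstruction target, in the same source/target format T5 uses for span infilling. The abstract does not specify the exact formulation, so the wh-word list and the sentinel-based encoding here are assumptions for illustration only.

```python
import re

# Assumed wh-word inventory; the paper's actual list may differ.
WH_WORDS = {"what", "who", "whom", "whose", "which", "when", "where", "why", "how"}

def wh_word_deletion(question: str):
    """Mask each wh-word with a T5 sentinel token and build the matching target."""
    source, target = [], []
    sentinel = 0
    for tok in question.split():
        # Strip punctuation before checking membership (e.g. "Who," -> "who").
        if re.sub(r"\W", "", tok).lower() in WH_WORDS:
            source.append(f"<extra_id_{sentinel}>")
            target.append(f"<extra_id_{sentinel}> {tok}")
            sentinel += 1
        else:
            source.append(tok)
    # T5 targets end with one final sentinel after the last recovered span.
    target.append(f"<extra_id_{sentinel}>")
    return " ".join(source), " ".join(target)

src, tgt = wh_word_deletion("When did the war end and who won it?")
# src: "<extra_id_0> did the war end and <extra_id_1> won it?"
# tgt: "<extra_id_0> When <extra_id_1> who <extra_id_2>"
```

Question infilling would follow the same source/target scheme, with sentinel tokens replacing arbitrary question spans rather than wh-words specifically.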
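The back-translation step can likewise be sketched with off-the-shelf translation models: each question is translated into a pivot language and back, and the resulting paraphrase is added as an extra post-training instance. The pivot language (German) and the Helsinki-NLP OPUS-MT checkpoints below are assumptions; the paper does not name the translation systems it uses.

```python
from transformers import MarianMTModel, MarianTokenizer

def load(name):
    return MarianTokenizer.from_pretrained(name), MarianMTModel.from_pretrained(name)

# Hypothetical pivot: English -> German -> English.
en_de_tok, en_de = load("Helsinki-NLP/opus-mt-en-de")
de_en_tok, de_en = load("Helsinki-NLP/opus-mt-de-en")

def translate(texts, tok, model):
    batch = tok(texts, return_tensors="pt", padding=True, truncation=True)
    out = model.generate(**batch, max_new_tokens=128)
    return tok.batch_decode(out, skip_special_tokens=True)

def back_translate(questions):
    """Round-trip translation; outputs are paraphrases used as augmented examples."""
    return translate(translate(questions, en_de_tok, en_de), de_en_tok, de_en)

augmented = back_translate(["What causes the seasons to change?"])
```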
