Abstract

Transcribing structured data into readable text (data-to-text) is a fundamental language generation task. One of its challenges is planning which input records to realize in text. Recent works tackle this problem with a static planner, which performs record planning in advance of text realization. However, such planners cannot revise their plans to cope with unexpected realized text, and they require gold plans for supervised training. To address these issues, we propose a model with a dynamic planner, which decomposes text generation into two alternating procedures: record planning and text realization. We also devise a novel likelihood-driven training strategy for the planner. This strategy exploits sentence likelihood to select input records and requires no annotated plans. In addition, we design a metric based on set similarity to evaluate the quality of predicted plans. We conduct comprehensive experiments on two data-to-text datasets, E2E and EPW. Our best model considerably outperforms previous works on both text metrics and plan metrics, and the likelihood-driven strategy is also competitive for training the dynamic planner.
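The abstract mentions a plan-evaluation metric based on set similarity but does not spell out its formulation. As an illustration only, a plain Jaccard-style similarity over record sets could serve this role; the function name and the (field, value) encoding of records below are hypothetical, not taken from the paper:

```python
def plan_similarity(predicted, reference):
    """Jaccard-style set similarity between a predicted record plan and a
    reference plan. Hypothetical sketch: the paper's exact metric is not
    specified in the abstract."""
    pred, ref = set(predicted), set(reference)
    if not pred and not ref:
        return 1.0  # two empty plans are trivially identical
    return len(pred & ref) / len(pred | ref)

# Records encoded as (field, value) pairs, in the style of E2E meaning
# representations (an assumed encoding).
pred = [("name", "Aromi"), ("food", "Italian"), ("area", "city centre")]
ref = [("name", "Aromi"), ("food", "Italian"), ("priceRange", "cheap")]
print(plan_similarity(pred, ref))  # 2 shared records out of 4 total -> 0.5
```

A set-based score like this ignores record order, so it measures content selection rather than the ordering decisions a planner also makes; an order-sensitive variant would need a sequence-based comparison instead.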
