Abstract

Generating natural language descriptions for structured tables consisting of multiple attribute-value tuples is a convenient way to help people understand them. Most neural table-to-text models are based on the encoder-decoder framework, but it is hard for a vanilla encoder to learn an accurate semantic representation of a complex table. The challenges are two-fold: first, table-to-text datasets often contain a large number of attributes across different domains, making it hard for the encoder to incorporate these heterogeneous resources; second, a single encoder also has difficulty modeling the complex attribute-value structure of the tables. To this end, we first propose a two-level hierarchical encoder with coarse-to-fine attention to handle the attribute-value structure of the tables. Furthermore, to capture accurate semantic representations of the tables, we propose three joint tasks in addition to the primary encoder-decoder learning, namely sequence labeling, text autoencoding, and multi-label classification, as auxiliary supervision for the table encoder. We evaluate our models on the widely used WIKIBIO dataset, which contains Wikipedia infoboxes and their associated descriptions; it features complex tables and a large number of attributes across different domains. We achieve state-of-the-art performance on both automatic and human evaluation metrics.
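To make the two-level encoding and coarse-to-fine attention concrete, the following is a minimal PyTorch sketch, not the authors' implementation: it assumes a table arrives as padded attribute ids and value-token ids, ignores padding masks for brevity, and omits the auxiliary supervision tasks. All names here (HierarchicalTableEncoder, coarse_to_fine_attention, emb_dim, hid_dim) are illustrative and not taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalTableEncoder(nn.Module):
    def __init__(self, vocab_size, attr_vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.attr_emb = nn.Embedding(attr_vocab_size, emb_dim)
        # Word-level (fine) encoder: runs over the value tokens of each attribute.
        self.word_rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Field-level (coarse) encoder: runs over the per-attribute summaries.
        self.field_rnn = nn.GRU(emb_dim + hid_dim, hid_dim, batch_first=True)

    def forward(self, attr_ids, value_ids):
        # attr_ids:  (num_fields,)          attribute id per field
        # value_ids: (num_fields, max_len)  value tokens per field (0-padded)
        word_states, _ = self.word_rnn(self.word_emb(value_ids))    # (F, L, H)
        field_summary = word_states[:, -1]                          # (F, H)
        field_input = torch.cat([self.attr_emb(attr_ids), field_summary], dim=-1)
        field_states, _ = self.field_rnn(field_input.unsqueeze(0))  # (1, F, H)
        return word_states, field_states.squeeze(0)

def coarse_to_fine_attention(dec_state, word_states, field_states):
    # Coarse attention over fields, then fine attention over the words inside
    # each field, with word weights rescaled by their field's coarse weight.
    field_attn = F.softmax(field_states @ dec_state, dim=0)   # (F,)   coarse
    word_attn = F.softmax(word_states @ dec_state, dim=-1)    # (F, L) fine
    joint = field_attn.unsqueeze(-1) * word_attn               # (F, L)
    joint = joint / joint.sum()
    context = (joint.unsqueeze(-1) * word_states).sum(dim=(0, 1))  # (H,)
    return context, joint

# Toy usage with hypothetical sizes: three fields, up to six value tokens each.
enc = HierarchicalTableEncoder(vocab_size=5000, attr_vocab_size=300)
attr_ids = torch.tensor([3, 17, 42])
value_ids = torch.randint(1, 5000, (3, 6))
word_states, field_states = enc(attr_ids, value_ids)
context, attn = coarse_to_fine_attention(torch.zeros(256), word_states, field_states)

In this sketch the decoder would consume the returned context vector at each step; the paper's auxiliary tasks (sequence labeling, text autoencoding, multi-label classification) would be additional heads trained jointly on the encoder outputs.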
