Abstract

Most people need textual or visual interfaces in order to make sense of Semantic Web data. In this paper, we investigate the problem of generating natural language summaries for Semantic Web data using neural networks. Our end-to-end trainable architecture encodes the information from a set of triples into a vector of fixed dimensionality and generates a textual summary by conditioning the output on the encoded vector. We explore a set of different approaches that enable our models to verbalise entities from the input set of triples in the generated text. Our systems are trained and evaluated on two corpora of loosely aligned Wikipedia snippets with triples from DBpedia and Wikidata, with promising results.
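To make the task concrete, here is a small, purely hypothetical illustration of the kind of input and output involved; the entity names, prefixes, and pairing below are invented for exposition and are not taken from the paper's DBpedia or Wikidata corpora.

```python
# Hypothetical example of the task's input and output (invented names;
# not drawn from the paper's corpora).

# Input: an unordered set of (subject, predicate, object) triples about one entity.
triples = {
    ("dbr:Ada_Lovelace", "dbo:birthPlace", "dbr:London"),
    ("dbr:Ada_Lovelace", "dbo:field", "dbr:Mathematics"),
}

# Output: a short textual summary in which entities from the input triples
# are verbalised within the generated text.
summary = "Ada Lovelace was a mathematician who was born in London."
```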

Highlights

  • Since the sets of triples given to our systems as input are unordered and not sequentially correlated, we propose a model that consists of a feed-forward neural network that encodes each triple from the input set into a vector of fixed dimensionality

  • Triples with similar semantic meaning are mapped to nearby positions in the resulting vector space. We couple this novel encoder with an RNN-based decoder that generates the textual summary one token at a time (a minimal sketch of this architecture follows the highlights)

  • Related approaches have adapted the encoder-decoder framework to generate the first sentence of a Wikipedia biography [13, 34]
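The following is a minimal sketch, in PyTorch, of the architecture the highlights describe: a feed-forward network that encodes each triple independently, an order-invariant aggregation of the resulting vectors, and a GRU decoder that emits the summary one token at a time. All layer sizes, the shared knowledge-base vocabulary, the mean-pooling aggregation, and the use of teacher forcing are assumptions made for this sketch, not details taken from the paper.

```python
import torch
import torch.nn as nn

class TripleSummariser(nn.Module):
    """Feed-forward triple encoder + GRU decoder (illustrative sizes only)."""

    def __init__(self, kb_vocab, txt_vocab, emb=128, hid=256):
        super().__init__()
        self.kb_emb = nn.Embedding(kb_vocab, emb)            # subject/predicate/object ids
        self.encode = nn.Sequential(                          # one triple -> one vector
            nn.Linear(3 * emb, hid), nn.ReLU())
        self.txt_emb = nn.Embedding(txt_vocab, emb)
        self.decoder = nn.GRU(emb, hid, batch_first=True)     # generates tokens left to right
        self.out = nn.Linear(hid, txt_vocab)

    def forward(self, triples, tokens):
        # triples: (batch, n_triples, 3) ids; tokens: (batch, seq_len) summary ids
        s, p, o = self.kb_emb(triples).unbind(dim=2)           # each (batch, n_triples, emb)
        per_triple = self.encode(torch.cat([s, p, o], dim=-1)) # encode every triple separately
        summary_vec = per_triple.mean(dim=1)                   # order-invariant aggregation
        h0 = summary_vec.unsqueeze(0)                          # condition the decoder on the set
        dec_out, _ = self.decoder(self.txt_emb(tokens), h0)
        return self.out(dec_out)                               # logits over the text vocabulary

# Tiny smoke test with random ids (checks shapes only; no trained weights).
model = TripleSummariser(kb_vocab=1000, txt_vocab=5000)
triples = torch.randint(0, 1000, (2, 4, 3))    # 2 examples, 4 triples each
tokens = torch.randint(0, 5000, (2, 10))       # teacher-forced summary prefixes
logits = model(triples, tokens)                # shape (2, 10, 5000)
```

Verbalising entities from the input triples in the generated text (the "different approaches" mentioned in the abstract) would require an additional mechanism on top of this decoder; such mechanisms are not shown here.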

Summary

Introduction

While Semantic Web data, such as triples in the Resource Description Framework (RDF), is readily consumed by machines, most people need textual or visual interfaces in order to make sense of it. On the contrary, for humans, reading text is a much more accessible activity. In the context of the Semantic Web, Natural Language Generation (NLG) is concerned with the implementation of textual interfaces that would make the information encoded in triples accessible to a broader audience. Research has mostly focused on adapting rule-based approaches to generate text from Semantic Web data. These systems work in domains with small vocabularies; however, the difficulty of transferring the involved rules across different domains or languages, along with the tedious repetition of their textual patterns, prevented them from becoming widely accepted [4]. We address these limitations by proposing a statistical model based on an adaptation of the encoder-decoder framework [5, 6].
