Abstract

Question generation is useful for supporting reading comprehension, asking spontaneous questions in dialogue systems, and augmenting datasets for question answering. Previous studies have proposed many models that generate questions from a context, but none of them handles long contexts well. To overcome this limitation, we generated questions from an intermediate representation of a context, such as a knowledge graph. In this study, we focused on generating questions from knowledge graphs with the T5 language model. We used the language model to generate questions from the knowledge graph and masked the encoder's self-attention so that the model was trained while explicitly preserving the graph's structure. In automatic evaluation, the T5 language model, both with and without the mask, was comparable to the bidirectional Graph2Seq model (G2S), a well-known QG model over knowledge graphs. Moreover, the masked model slightly outperformed the unmasked model with t5-small on four benchmarks. The code and data are publicly available at https://github.com/Macho000/T5-for-KGQG.
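To make the masking idea concrete, the sketch below shows one way to build a structure-aware self-attention mask for a linearized knowledge graph and apply it to the T5 encoder via Hugging Face Transformers. The abstract does not specify the exact masking scheme, so the connectivity rule here (tokens attend only within their own triple, plus a globally visible EOS token) is a simplified stand-in for the paper's method; the toy triples and helper variables are illustrative assumptions.

```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Toy knowledge graph: two triples sharing a head entity (illustrative only).
triples = [
    ("Alan Turing", "field", "computer science"),
    ("Alan Turing", "born in", "London"),
]

# Linearize the graph; remember which triple each token came from.
token_ids, segment = [], []
for idx, (h, r, t) in enumerate(triples):
    ids = tokenizer.encode(f"{h} {r} {t}", add_special_tokens=False)
    token_ids += ids
    segment += [idx] * len(ids)
token_ids.append(tokenizer.eos_token_id)
segment.append(-1)  # sentinel: the EOS token attends to everything

# Structure-aware mask: token i may attend to token j only if both belong
# to the same triple, or one of them is the EOS sentinel.
n = len(token_ids)
mask = torch.zeros(1, n, n)
for i in range(n):
    for j in range(n):
        if segment[i] == segment[j] or -1 in (segment[i], segment[j]):
            mask[0, i, j] = 1.0

# Hugging Face encoders accept a 3D attention_mask of shape
# (batch, query_len, key_len), so the structural mask can be applied
# directly in the encoder's self-attention.
input_ids = torch.tensor([token_ids])
encoder_outputs = model.encoder(input_ids=input_ids, attention_mask=mask)
print(encoder_outputs.last_hidden_state.shape)  # (1, n, d_model)
```

Training would then proceed in the usual sequence-to-sequence fashion, with gold questions as decoder labels. One practical note: passing the 3D mask straight through `generate()` may misbehave because the same mask is reused for decoder cross-attention, so running the encoder explicitly and handing its outputs to the decoder, as above, is the safer pattern.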
