Abstract

Commonsense knowledge graphs support applications in commonsense reasoning, question answering, and related tasks. However, automatic knowledge graph construction remains an open goal for AI researchers because tractable and objective commonsense information is difficult to obtain. Moreover, related research has so far been largely limited to English, which slows the development of commonsense knowledge resources in other languages. Previous studies constructed knowledge bases as relational schemas using expert knowledge, semi-structured text extraction, and unstructured text extraction. However, extraction-based methods can only capture knowledge mentioned explicitly in the text, whereas commonsense knowledge is usually implicit. In this paper, we propose a commonsense generative model with a novel attention mechanism and investigate whether pre-trained language models can effectively learn and generate novel knowledge. Empirical results show that our model generates correct commonsense knowledge, achieving up to 50.10% precision on the ATOMIC dataset under human evaluation.
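
To make the generation setup concrete, the sketch below shows one way to elicit ATOMIC-style commonsense knowledge from a pretrained language model. It is not the paper's model (in particular, the novel attention mechanism is omitted), and the "gpt2" checkpoint, the prompt format, and the xIntent relation token are illustrative assumptions, not details taken from the paper.

    # Minimal sketch, assuming a Hugging Face GPT-2 checkpoint and an
    # ATOMIC-style "event + relation -> inference" prompt format.
    # This is NOT the paper's model; it only illustrates generative
    # (rather than extractive) commonsense knowledge production.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    # Prompt with a subject event and a relation; the model completes
    # the object phrase, i.e. the implicit commonsense inference.
    prompt = "PersonX pays for PersonY's coffee. xIntent:"
    inputs = tokenizer(prompt, return_tensors="pt")

    outputs = model.generate(
        **inputs,
        max_new_tokens=12,
        do_sample=True,        # sampling encourages novel completions
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Because the completion is generated rather than extracted, the model can produce inferences (e.g., "to be kind") that never appear verbatim in any source text, which is the contrast with extraction-based construction drawn above.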
