Abstract

Knowledge facts are typically represented as relational triples, yet we observe that some commonsense facts are represented by triples whose forms are inconsistent with the corresponding language expressions. For commonsense mining tasks, this inconsistency poses a challenge to the prevailing methods built on pre-trained language models, which learn from the expression of language. However, few studies focus on this inconsistency issue. To fill this gap, in this paper we term commonsense knowledge whose triple form is heavily inconsistent with its language expression "deep commonsense knowledge" and first conduct extensive exploratory experiments to study it. We show that deep commonsense knowledge accounts for a significant portion of commonsense knowledge, while conventional methods based on pre-trained language models fail to capture it effectively. We further propose a novel method that mines deep commonsense knowledge directly from raw text, i.e., the language expression itself, alleviating the reliance of conventional methods on the triple representation form. Experiments demonstrate that our proposed method substantially improves performance in mining deep commonsense knowledge.
