Abstract

Commonsense knowledge acquisition and reasoning have long been core problems in artificial intelligence. However, scalable methods for collecting commonsense knowledge have been lacking. In this paper, we propose principles for collecting commonsense knowledge based on selectional preference, a common phenomenon in human languages that has been shown to be related to semantics. We generalize the definition of selectional preference from one-hop linguistic syntactic relations to higher-order relations over linguistic graphs. Unlike previous commonsense knowledge definitions (e.g., ConceptNet), selectional preference (SP) knowledge relies only on statistical distributions over linguistic graphs, which can be acquired efficiently and accurately from unlabeled corpora with modern tools, rather than on human-defined relations. As a result, acquiring SP knowledge is a much more scalable way of acquiring commonsense knowledge. Following this principle, we develop ASER, a large-scale knowledge graph of eventualities (a linguistic term covering activities, states, and events), where each eventuality is represented as a dependency graph and relations between eventualities are discourse relations defined in shallow discourse parsing. Higher-order selectional preference over the collected linguistic graphs reflects various kinds of commonsense knowledge. For example, dogs are more likely to bark than cats, as the eventuality “dog barks” appears 14,998 times in ASER while “cat barks” appears only 6 times. “Be hungry” is more likely to be the reason for, rather than the result of, “eat food,” as the edge 〈“be hungry,” Cause, “eat food”〉 appears in ASER while 〈“eat food,” Cause, “be hungry”〉 does not.
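
The two examples above can be sketched in a few lines of code. This is an illustrative toy model, not the ASER API: eventualities and discourse edges are reduced to dictionary entries, with the frequency counts taken directly from the abstract, to show how SP knowledge reduces to comparisons over observed statistics.

```python
# Toy model of SP knowledge over a linguistic graph (not the ASER API).
# Eventuality counts are the ones reported in the abstract.
eventuality_counts = {
    ("dog", "bark"): 14998,
    ("cat", "bark"): 6,
}

# Discourse edges are directed: only one direction of Cause is observed.
discourse_edges = {("be hungry", "Cause", "eat food")}

def prefers(subject_a, subject_b, verb, counts):
    """True if the verb selects subject_a more strongly than subject_b."""
    return counts.get((subject_a, verb), 0) > counts.get((subject_b, verb), 0)

def causes(head, tail, edges):
    """True if the graph contains a Cause edge from head to tail."""
    return (head, "Cause", tail) in edges

print(prefers("dog", "cat", "bark", eventuality_counts))   # True
print(causes("be hungry", "eat food", discourse_edges))    # True
print(causes("eat food", "be hungry", discourse_edges))    # False
```

The asymmetry of the `Cause` edge is what distinguishes "reason" from "result" here: the knowledge is not a symmetric association but a directed statistic over the graph.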
Moreover, motivated by the observation that humans understand events by abstracting observed events to a higher level, and can thus transfer their knowledge to new events, we propose a conceptualization module on top of the collected knowledge to significantly boost the coverage of ASER. In total, ASER contains 648 million edges between 438 million eventualities. After conceptualization with Probase, a selectional-preference-based knowledge base of concept-instance relations, our concept graph contains 15 million conceptualized eventualities and 224 million edges between them. Detailed analysis is provided to demonstrate the quality of both graphs. All the collected data, APIs, and tools for converting the collected SP knowledge into the ConceptNet format are available at https://github.com/HKUST-KnowComp/ASER.
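
A minimal sketch of the conceptualization idea: an observed eventuality is abstracted by replacing an instance with a concept from a Probase-style isA map, so knowledge about one instance can transfer to unseen ones. The isA entries and the `conceptualize` helper below are hypothetical illustrations, not actual Probase data or ASER code.

```python
# Hypothetical sketch of eventuality conceptualization.
# The isA map stands in for Probase-style concept-instance knowledge;
# these entries are illustrative only.
isa = {"dog": "animal", "puppy": "animal", "cat": "animal"}

def conceptualize(eventuality, isa_map):
    """Abstract an (instance, verb) eventuality to its (concept, verb) form."""
    subject, verb = eventuality
    concept = isa_map.get(subject, subject)  # fall back to the instance itself
    return (concept, verb)

# "dog barks" and "puppy barks" both abstract to the same concept-level
# eventuality, which is how conceptualization boosts coverage.
print(conceptualize(("dog", "bark"), isa))    # ('animal', 'bark')
print(conceptualize(("puppy", "bark"), isa))  # ('animal', 'bark')
```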
