Abstract
Natural language processing techniques often aim at automatically extracting semantics from texts. However, they usually rely on semantic knowledge contained in dictionaries and resources such as WordNet, Wikipedia, and FrameNet. Accordingly, there is a large literature on the creation of novel semantic resources, as well as on attempts to integrate existing ones. Here, we focus on common-sense knowledge, which has interesting characteristics but also raises challenging issues such as ambiguity, vagueness, and inconsistency. In this paper, we use ConceptNet, a large-scale, crowdsourced common-sense knowledge base, to qualitatively evaluate its role in the perception of semantic association among words. We then propose an unsupervised method to disambiguate ConceptNet instances and integrate them into WordNet, demonstrating how the enriched resource improves the recognition of semantic association. Finally, we describe a novel approach to labeling semantically associated words by exploiting the functional and behavioral information typically contained in common sense, showing how this enriches the explanation (and the use) of relatedness and similarity with non-numeric information.
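To make the disambiguation step concrete, the sketch below shows one simple, Lesk-style way a ConceptNet term could be linked to a WordNet synset: score each candidate synset by the overlap between its gloss and the term's neighbourhood in ConceptNet. This is an illustrative baseline under stated assumptions (the public ConceptNet 5 REST API at api.conceptnet.io and NLTK's WordNet interface), not the unsupervised method proposed in the paper.

```python
"""Minimal sketch: link a word to a WordNet synset using its ConceptNet context.

Assumptions: network access to the public ConceptNet 5 REST API and an NLTK
installation with the WordNet corpus downloaded (nltk.download("wordnet")).
This is NOT the paper's method, only an illustrative gloss-overlap baseline.
"""
import re

import requests
from nltk.corpus import wordnet as wn


def conceptnet_context(word, limit=50):
    """Collect lower-cased tokens from edges around /c/en/<word> in ConceptNet."""
    url = f"http://api.conceptnet.io/c/en/{word}"
    edges = requests.get(url, params={"limit": limit}).json().get("edges", [])
    tokens = set()
    for edge in edges:
        for node in (edge.get("start", {}), edge.get("end", {})):
            tokens.update(re.findall(r"[a-z]+", node.get("label", "").lower()))
    tokens.discard(word)
    return tokens


def best_synset(word):
    """Pick the WordNet synset whose gloss and lemmas best overlap the context."""
    context = conceptnet_context(word)
    best, best_score = None, -1
    for synset in wn.synsets(word):
        gloss = set(re.findall(r"[a-z]+", synset.definition().lower()))
        gloss.update(lemma.lower() for lemma in synset.lemma_names())
        score = len(gloss & context)
        if score > best_score:
            best, best_score = synset, score
    return best, best_score


if __name__ == "__main__":
    synset, score = best_synset("bank")
    if synset is not None:
        print(synset.name(), "-", synset.definition(), f"(overlap={score})")
```

In practice, an ambiguous word such as "bank" would be attached to the synset whose definition shares the most vocabulary with its ConceptNet neighbours; the paper's actual approach is unsupervised as well, but its specific scoring is not reproduced here.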