Abstract

Semantic similarity measures between linguistic terms are essential in many Natural Language Processing (NLP) applications. Term similarity is conventionally treated as a symmetric relation. However, directional (asymmetric) relations also exist in lexical semantics, and symmetric similarity measures are poorly suited to identifying them. Furthermore, directional similarity covers the more general case and is more practical than symmetric similarity in certain NLP applications. Moreover, the semantic features that underpin current similarity measures cannot efficiently represent large-scale web text collections. Hence, we propose a new directional similarity method that considers feature representations in both linguistic and extra-linguistic dimensions. We evaluate our approach on standard word similarity benchmarks, reporting state-of-the-art performance on multiple datasets. Experiments show that our directional method handles both symmetric and directional semantic relations and leads to clear improvements in entity search and query expansion.
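To illustrate what "directional" means here, the sketch below shows a generic feature-inclusion score in the spirit of classic asymmetric measures (e.g., precision-style inclusion over weighted features). It is only a minimal illustration under assumed toy inputs, not the method proposed in the paper; the feature dictionaries and scores are hypothetical.

```python
# Minimal sketch of an asymmetric (directional) similarity over weighted
# feature dictionaries. Not the authors' method: a generic inclusion score
# measuring how well term A's features are covered by term B's features.

def directional_sim(features_a, features_b):
    """Return a score in [0, 1]; in general sim(a, b) != sim(b, a)."""
    total = sum(features_a.values())
    if total == 0:
        return 0.0
    covered = sum(w for f, w in features_a.items() if f in features_b)
    return covered / total


# Hypothetical toy features: "dog" is largely covered by the broader term
# "animal", but not the reverse, so the two directions score differently.
dog = {"barks": 2.0, "has_fur": 1.0, "is_pet": 1.5}
animal = {"has_fur": 1.0, "breathes": 3.0, "is_pet": 1.5, "barks": 2.0}

print(directional_sim(dog, animal))   # 1.0  (dog's features fully covered)
print(directional_sim(animal, dog))   # 0.6  (animal only partly covered)
```

A symmetric measure such as cosine would assign the same value in both directions, which is exactly what makes it less suitable for detecting directional relations like hyponymy.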
