Abstract

Measuring sentence similarity is an essential problem in natural language processing, closely related to searching for semantic meaning in natural language. The task is to find semantic correspondence between two sentences, no matter how their words are arranged, and it is important to measure this similarity accurately. Existing methods for computing the similarity between sentences were largely adapted from approaches designed for long texts; because these methods operate in very high-dimensional spaces, they are inefficient, require human input, and are not flexible enough for some applications. In this study, we propose a hybrid method (HydMethod) that considers not only semantic information, drawn from lexical databases, word embeddings, and corpus statistics, but also implied word-order information. Through lexical databases, our method models human common-sense knowledge, and that knowledge can then be adapted for use in different domains by incorporating corpus statistics; the methodology is therefore applicable across several domains. In our experiments, we used two standard datasets, the Pilot Short Text Semantic Similarity Benchmark and the Microsoft Research Paraphrase Corpus, to demonstrate the efficacy of the proposed method. The proposed method outperforms existing approaches on both datasets, giving the highest correlation values for both word and sentence similarity. Moreover, it achieves up to a 32% improvement over methods based only on word vectors or WordNet. On the Rubenstein and Goodenough word and sentence pairs, our algorithm's similarity measure attains a high Pearson correlation coefficient of 0.8953.
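The abstract does not spell out the combination rule, but the general shape of such hybrid measures can be sketched as below. This is an illustrative assumption, not the paper's actual algorithm: the semantic component here is a simple word-overlap placeholder (a full method would use WordNet relations, word embeddings, and corpus statistics), and the weighting parameter `delta` is hypothetical.

```python
# Illustrative sketch of a hybrid sentence-similarity measure that blends a
# semantic component with a word-order component. NOT the paper's algorithm:
# the semantic part is a Jaccard-overlap placeholder, and delta is a
# hypothetical mixing weight.

import math


def order_vector(sentence_words, joint_words):
    # For each word in the joint word set, record its 1-based position in
    # the sentence, or 0 if the word does not occur there.
    return [sentence_words.index(w) + 1 if w in sentence_words else 0
            for w in joint_words]


def word_order_similarity(s1, s2):
    # Compare word positions: 1 - ||r1 - r2|| / ||r1 + r2|| over position
    # vectors built from the joint word set of the two sentences.
    w1, w2 = s1.lower().split(), s2.lower().split()
    joint = sorted(set(w1) | set(w2))
    r1, r2 = order_vector(w1, joint), order_vector(w2, joint)
    diff = math.sqrt(sum((a - b) ** 2 for a, b in zip(r1, r2)))
    total = math.sqrt(sum((a + b) ** 2 for a, b in zip(r1, r2)))
    return 1.0 - diff / total if total else 1.0


def semantic_similarity(s1, s2):
    # Placeholder semantic component: Jaccard overlap of word sets. A real
    # method would score word pairs via a lexical database and embeddings.
    w1, w2 = set(s1.lower().split()), set(s2.lower().split())
    return len(w1 & w2) / len(w1 | w2) if w1 | w2 else 1.0


def hybrid_similarity(s1, s2, delta=0.85):
    # delta weights semantics against word order (hypothetical knob).
    return (delta * semantic_similarity(s1, s2)
            + (1 - delta) * word_order_similarity(s1, s2))
```

For example, `hybrid_similarity("dog bites man", "man bites dog")` scores below 1.0 even though the word sets are identical, because the word-order component penalizes the rearrangement.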

