Abstract

Measuring the semantic similarity between phrases and sentences is an important task in natural language processing (NLP) and information retrieval (IR). We compare the quality of distributional semantic NLP models against phrase-based semantic IR models. The evaluation is based on the correlation between human judgements and model scores on a distributional phrase similarity task. We experiment with four NLP and two IR model variants. On the NLP side, models vary over normalization schemes and composition operators. On the IR side, models vary in how they estimate the probability of a term occurring in a document: P(t|d), which uses only term co-occurrence information, and P(t|d, sim), which also incorporates term distributional similarity. A mixture of the two methods is presented and evaluated. For both methods, word meanings are derived from large corpora: the BNC and ukWaC. One of the main findings is that grammatical distributional models give better scores than the IR models. This suggests that an IR model enriched with distributional linguistic information performs better on the long-standing IR problem of retrieving documents that share no direct symbolic relationship with the query concepts.
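
To make the two estimation schemes concrete, the following minimal Python sketch shows one way such a mixture could be computed. The function names, the similarity-weighted estimate in p_sim, and the interpolation weight lam are illustrative assumptions, not the paper's exact formulation; normalization of the similarity weights is glossed over for brevity.

from collections import Counter

def p_cooc(term, doc_tokens):
    # Maximum-likelihood estimate P(t|d) from raw term counts in the document.
    counts = Counter(doc_tokens)
    return counts[term] / len(doc_tokens)

def p_sim(term, doc_tokens, sim):
    # Similarity-smoothed estimate P(t|d, sim): each document term passes
    # probability mass to `term` in proportion to their similarity.
    counts = Counter(doc_tokens)
    total = len(doc_tokens)
    return sum(sim(term, w) * (count / total) for w, count in counts.items())

def p_mixture(term, doc_tokens, sim, lam=0.5):
    # Hypothetical linear interpolation of the two estimates with weight lam.
    return lam * p_cooc(term, doc_tokens) + (1 - lam) * p_sim(term, doc_tokens, sim)

# Toy usage: sim is 1.0 for identical terms and a small constant otherwise;
# a real model would use distributional similarity derived from the BNC or ukWaC.
doc = "the cat sat on the mat".split()
toy_sim = lambda a, b: 1.0 if a == b else 0.1
print(p_mixture("cat", doc, toy_sim))

With a genuine distributional similarity function in place of toy_sim, P(t|d, sim) assigns non-zero probability to query terms that never occur in the document but are distributionally close to terms that do, which is exactly the vocabulary-mismatch case the abstract highlights.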
