Abstract

Sparse representations of text such as bag-of-words models or extended explicit semantic analysis (ESA) representations are commonly used in many NLP applications. However, for short texts, the similarity between two such sparse vectors is not accurate due to the small term overlap. While there have been multiple proposals for dense representations of words, measuring similarity between short texts (sentences, snippets, paragraphs) requires combining these token-level similarities. In this paper, we propose to combine ESA representations and word2vec representations as a way to generate denser representations and, consequently, a better similarity measure between short texts. We study three densification mechanisms that align sparse representations via many-to-many, many-to-one, and one-to-one mappings. We then show the effectiveness of these mechanisms on measuring similarity between short texts.
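
For concreteness, the one-to-one mapping can be realized as a maximum-weight bipartite matching between the terms (or concepts) of the two texts. The sketch below is a hedged illustration: the similarity matrix is made up and stands in for word2vec cosine similarities, and the mean-of-matched-weights score is an illustrative choice, not necessarily the paper's exact formulation.

```python
# Sketch of a one-to-one alignment between the dimensions of two sparse
# representations, realized as maximum-weight bipartite matching.
import numpy as np
from scipy.optimize import linear_sum_assignment

# sim[i, j]: assumed word2vec cosine similarity between term i of text A
# and term j of text B (illustrative numbers, not real embeddings).
sim = np.array([
    [0.9, 0.2, 0.1],
    [0.3, 0.1, 0.8],
    [0.2, 0.7, 0.3],
])

# linear_sum_assignment minimizes total cost, so negate to maximize similarity.
rows, cols = linear_sum_assignment(-sim)
score = sim[rows, cols].mean()
print(list(zip(rows, cols)), score)  # one-to-one term alignment and its score
```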

Highlights

  • The bag-of-words model has been used in many applications as the state-of-the-art method for tasks such as document classification and information retrieval. It represents each text as a bag of words and computes the similarity, e.g., the cosine value, between two sparse vectors in the high-dimensional space (see the sketch after this list)

  • Instead of using only the words in a document, explicit semantic analysis (ESA) uses a bag-of-concepts retrieved from Wikipedia to represent the text
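
As a concrete illustration of the sparse-vector similarity the first highlight refers to, the following minimal sketch computes the cosine similarity of two bag-of-words vectors. The example snippets are hypothetical and chosen to show that related texts with disjoint vocabularies score zero.

```python
# Minimal sketch: cosine similarity between sparse bag-of-words vectors.
from collections import Counter
import math

def bow(text):
    """Represent a text as a sparse term-count vector (a dict)."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(u[t] * v[t] for t in u.keys() & v.keys())
    norm_u = math.sqrt(sum(c * c for c in u.values()))
    norm_v = math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

a = bow("engine oil change")
b = bow("car motor maintenance")
print(cosine(a, b))  # 0.0: no term overlap, although the texts are related
```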

Summary

Introduction

The bag-of-words model has been used in many applications as the state-of-the-art method for tasks such as document classification and information retrieval. It represents each text as a bag of words and computes the similarity, e.g., the cosine value, between two sparse vectors in the high-dimensional space. Instead of using only the words in a document, explicit semantic analysis (ESA) uses a bag of concepts retrieved from Wikipedia to represent the text, and the similarity between two texts can then be computed in this enriched concept space. Both the bag-of-words and bag-of-concepts models suffer from the sparsity problem. Because both models use sparse vectors to represent text, when comparing two pieces of text the similarity can be zero even when the snippets are highly related but use different vocabulary. ESA, despite augmenting the lexical space with relevant Wikipedia concepts, still suffers from the sparsity problem. We illustrate this problem with a simple experiment, done by choosing documents from the “rec.autos” group in the 20-newsgroups data set. We verify the superiority of the proposed methods on three different NLP tasks.
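
To make the densification idea concrete, the sketch below implements a many-to-one alignment in the spirit described above: each term of one text is aligned to its most similar term of the other text under word embeddings, so related snippets with no lexical overlap receive a non-zero score. The toy embeddings and the simple averaging are illustrative assumptions standing in for pretrained word2vec vectors, not the paper's exact formulation.

```python
# Sketch of many-to-one densification: align every term of text A to its
# most embedding-similar term of text B, then average the best matches.
import numpy as np

# Hypothetical 3-d embeddings standing in for pretrained word2vec vectors.
EMB = {
    "engine":      np.array([0.9, 0.1, 0.0]),
    "oil":         np.array([0.7, 0.3, 0.1]),
    "change":      np.array([0.2, 0.8, 0.1]),
    "car":         np.array([0.8, 0.2, 0.1]),
    "motor":       np.array([0.9, 0.2, 0.0]),
    "maintenance": np.array([0.3, 0.7, 0.2]),
}

def word_sim(w1, w2):
    """Cosine similarity between two word embeddings."""
    u, v = EMB[w1], EMB[w2]
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def many_to_one_sim(text_a, text_b):
    """Average, over terms of text_a, the best similarity to any term of text_b."""
    terms_a, terms_b = text_a.split(), text_b.split()
    return sum(max(word_sim(a, b) for b in terms_b) for a in terms_a) / len(terms_a)

# Non-zero despite zero term overlap between the two snippets.
print(many_to_one_sim("engine oil change", "car motor maintenance"))
```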

Sparse Vector Densification
Similarity Augmentation
Term Similarity Measure
Experiments
Dataless Classification
Document Similarity
Event Classification
Related Work
Conclusion