Abstract

Continuous word representations have been remarkably useful across NLP tasks but remain poorly understood. We ground word embeddings in semantic spaces studied in the cognitive-psychometric literature, taking these spaces as the primary objects to recover. To this end, we relate log co-occurrences of words in large corpora to semantic similarity assessments and show that co-occurrences are indeed consistent with a Euclidean semantic space hypothesis. Framing word embedding as metric recovery of a semantic space unifies existing word embedding algorithms, ties them to manifold learning, and demonstrates that existing algorithms are consistent metric recovery methods given co-occurrence counts from random walks. Furthermore, we propose a simple, principled, direct metric recovery algorithm that performs on par with state-of-the-art word embedding and manifold learning methods. Finally, we complement the recent focus on analogies by constructing two new inductive reasoning datasets (series completion and classification) and demonstrate that word embeddings can be used to solve them as well.
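
To make the metric-recovery framing concrete, here is a minimal sketch of the "direct metric recovery" idea: fit word vectors and biases so that negative squared Euclidean distance plus biases matches log co-occurrence counts. The objective, weighting, and hyperparameters below are illustrative assumptions and may differ from the paper's exact algorithm:

```python
import numpy as np

# Sketch of direct metric recovery from log co-occurrences (illustrative).
# We fit word vectors x_i and biases a_i, b_j so that
#     -||x_i - x_j||^2 + a_i + b_j  ~  log C_ij
# by gradient descent on a squared loss over all pairs. The paper's actual
# objective (weighting, likelihood) may differ; this is only a sketch.

rng = np.random.default_rng(0)
V, dim, lr, steps = 50, 10, 0.05, 2000

# Toy co-occurrence counts (positive); real counts would come from a corpus.
C = rng.poisson(5.0, size=(V, V)) + 1.0
logC = np.log(C)

X = 0.1 * rng.standard_normal((V, dim))   # word vectors
a = np.zeros(V)                           # row biases
b = np.zeros(V)                           # column biases

for _ in range(steps):
    diff = X[:, None, :] - X[None, :, :]             # pairwise differences
    sqd = (diff ** 2).sum(-1)                        # squared distances
    resid = (-sqd + a[:, None] + b[None, :]) - logC  # model minus target
    # Gradient of 0.5 * sum(resid^2) w.r.t. X (mean-scaled for stability).
    gX = -2.0 * ((resid + resid.T)[:, :, None] * diff).sum(axis=1)
    X -= lr * gX / V
    a -= lr * resid.mean(axis=1)
    b -= lr * resid.mean(axis=0)

rmse = np.sqrt((resid ** 2).mean())
print(f"final RMSE of log-count fit: {rmse:.3f}")
```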

Highlights

  • Continuous space models of words, objects, and signals have become ubiquitous tools for learning rich representations of data, from natural language processing to computer vision

  • We show that pointwise mutual information (PMI) relates linearly to human similarity assessments, and that nearest-neighbor statistics are consistent with a Euclidean space hypothesis (Sections 2 and 3); a PMI computation sketch follows this list

  • Solving analogies using survey data alone: We demonstrate that, surprisingly, word embeddings trained directly on semantic similarity derived from survey data can solve analogy tasks
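
As a concrete illustration of the PMI computation behind the first highlight, the sketch below estimates PMI from a word co-occurrence matrix and correlates it with hypothetical human similarity ratings; the toy counts and the rating values are assumptions for illustration, not data from the paper:

```python
import numpy as np

def pmi_matrix(counts):
    """Pointwise mutual information from a co-occurrence count matrix.

    PMI(i, j) = log[ p(i, j) / (p(i) p(j)) ], estimated from counts.
    """
    total = counts.sum()
    p_ij = counts / total             # joint probabilities
    p_i = counts.sum(axis=1) / total  # row marginals
    p_j = counts.sum(axis=0) / total  # column marginals
    with np.errstate(divide="ignore"):
        return np.log(p_ij / np.outer(p_i, p_j))

# Toy example: 3 words with symmetric co-occurrence counts (illustrative).
counts = np.array([[0., 20., 5.],
                   [20., 0., 2.],
                   [5., 2., 0.]])
pmi = pmi_matrix(counts)

# Hypothetical survey similarity ratings for the same word pairs; the
# paper's finding is that PMI relates approximately linearly to such ratings.
pairs = [(0, 1), (0, 2), (1, 2)]
ratings = np.array([9.0, 6.5, 4.0])              # assumed survey values
pmis = np.array([pmi[i, j] for i, j in pairs])
r = np.corrcoef(pmis, ratings)[0, 1]             # Pearson correlation
print(f"correlation between PMI and ratings: {r:.2f}")
```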


Summary

Introduction

Continuous space models of words, objects, and signals have become ubiquitous tools for learning rich representations of data, from natural language processing to computer vision. There has been particular interest in word embeddings, largely due to their intriguing semantic properties (Mikolov et al., 2013b) and their success as features for downstream natural language processing tasks, such as named entity recognition (Turian et al., 2010) and parsing (Socher et al., 2013). The empirical success of word embeddings has prompted a parallel body of work that seeks to better understand their properties and associated estimation algorithms, and to explore possible revisions. Semantic spaces are vector spaces over concepts where Euclidean distances between points are assumed to indicate semantic similarities. We link such semantic spaces to word co-occurrences through semantic similarity assessments, and demonstrate that the observed co-occurrence counts possess statistical properties that are consistent with an underlying Euclidean space where distances are linked to semantic similarity.
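
Stated as a formula, the hypothesis being tested can be sketched as follows; the affine functional form and the constants $\alpha$ and $\beta$ are illustrative assumptions here, not the paper's exact statement:

```latex
% Euclidean semantic-space hypothesis (illustrative sketch):
% co-occurrence statistics such as PMI behave approximately as an
% affine function of squared Euclidean distance between latent
% semantic vectors x_i and x_j.
\mathrm{PMI}(i, j) \;\approx\; \alpha - \beta \,\lVert x_i - x_j \rVert_2^2,
\qquad \beta > 0 .
```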

The remainder of the paper is organized as follows:

  • Word vectors and semantic spaces
  • The semantic space of log co-occurrences
  • Semantic spaces and manifolds
  • Random walk model
  • Connection to manifold learning
  • Word embeddings as metric recovery
  • Metric regression from log co-occurrences
  • Empirical validation
  • Datasets
  • Method
  • Results on inductive reasoning tasks
  • Word embeddings can embed manifolds
  • Discussion
  • Appendix A: Metric recovery from Markov processes on graphs and manifolds
  • Appendix B: Consistency proofs for word embedding
  • Implementation details
  • Solving inductive reasoning tasks

For the inductive reasoning tasks, each problem is reduced to a task-specific ideal point in the embedding space, and the answer is taken to be the candidate word closest to it. The ideal point for each task is defined below:
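
The definitions themselves do not survive in this extract; what follows is a plausible reconstruction using the standard vector-offset formulation for these three tasks. The notation $x_w$ (the embedding of word $w$) and the exact offsets are assumptions, not quoted from the paper:

```latex
% Ideal points x* for the three inductive reasoning tasks (reconstruction).
% x_w is the embedding of word w; the predicted answer is the candidate
% whose vector lies closest to x*.
\begin{align*}
  \text{Analogy } (a : b :: c : \;?) :&\quad x^{*} = x_b - x_a + x_c \\
  \text{Series completion } (a, b, \;?) :&\quad x^{*} = x_b + (x_b - x_a) \\
  \text{Classification } (a_1, \dots, a_k) :&\quad
      x^{*} = \tfrac{1}{k} \textstyle\sum_{i=1}^{k} x_{a_i} \\
  \text{Answer} :&\quad
      \hat{w} = \operatorname*{arg\,min}_{w \in \text{choices}}
      \lVert x_w - x^{*} \rVert_2
\end{align*}
```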

