Abstract

Distributional semantics based on neural approaches is a cornerstone of Natural Language Processing, with surprising connections to human meaning representation as well. Recent Transformer-based Language Models have proven capable of producing contextual word representations that reliably convey sense-specific information, simply as a product of self-supervision. Prior work has shown that these contextual representations can be used to accurately represent large sense inventories as sense embeddings, to the extent that a distance-based solution to Word Sense Disambiguation (WSD) tasks outperforms models trained specifically for the task. Still, there remains much to understand about how to use these Neural Language Models (NLMs) to produce sense embeddings that better harness each NLM's meaning representation abilities. In this work, we introduce a more principled approach to leveraging information from all layers of NLMs, informed by a probing analysis of 14 NLM variants. We also emphasize the versatility of these sense embeddings in contrast to task-specific models, applying them to several sense-related tasks besides WSD, and demonstrate improved performance over prior work on sense embeddings using our proposed approach. Finally, we discuss unexpected findings regarding layer and model performance variations, and potential applications for downstream tasks.
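To make the distance-based approach described above concrete, the following is a minimal sketch under stated assumptions: sense embeddings are obtained by averaging the contextual vectors of sense-annotated occurrences (optionally pooled across several NLM layers), and disambiguation selects the nearest sense embedding under cosine similarity. The vectors, sense keys, and function names are illustrative stand-ins, not the paper's actual implementation.

```python
# Minimal sketch (assumed recipe, not the exact implementation): sense
# embeddings are averages of contextual vectors observed for each sense,
# and disambiguation is 1-nearest-neighbor under cosine similarity.
import numpy as np

def pool_layers(layer_vectors):
    """Average one word occurrence's vectors taken from several NLM layers."""
    return np.mean(np.stack(layer_vectors), axis=0)

def build_sense_embeddings(annotated):
    """annotated: dict mapping sense key -> list of contextual vectors."""
    return {sense: np.mean(np.stack(vecs), axis=0) for sense, vecs in annotated.items()}

def disambiguate(context_vec, sense_embeddings):
    """Return the sense key whose embedding is closest in cosine similarity."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return max(sense_embeddings, key=lambda s: cos(context_vec, sense_embeddings[s]))

# Toy usage with random stand-in vectors; a real pipeline would use NLM
# hidden states computed over a sense-annotated corpus such as SemCor.
rng = np.random.default_rng(0)
annotated = {
    "bank%1:14:00::": [rng.normal(size=8) for _ in range(3)],  # illustrative sense key
    "bank%1:17:01::": [rng.normal(size=8) for _ in range(3)],  # illustrative sense key
}
senses = build_sense_embeddings(annotated)
query = pool_layers([rng.normal(size=8), rng.normal(size=8)])
print(disambiguate(query, senses))
```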

Highlights

  • Lexical ambiguity is prevalent across different languages and plays an important role in improving communication efficiency (Piantadosi et al., 2012)

  • In this comparison we do not consider LMMS2348 because those sense embeddings are concatenated with fastText static embeddings, resulting in 300 dimensions having exactly the same distribution for sense embeddings corresponding to identical lemmas

  • The initial baseline methods proposed with WiC were based on cosine similarity, with thresholds learned from the validation set (a minimal sketch follows this list)
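The sketch below illustrates that baseline recipe under stated assumptions: the target word's contextual vectors from the two WiC sentences are compared by cosine similarity, and a pair is predicted as "same sense" when the similarity reaches a threshold tuned for accuracy on the validation set. The data layout and function names are hypothetical, not the original baseline code.

```python
# Assumed sketch of a thresholded cosine-similarity baseline for WiC.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def tune_threshold(dev_pairs):
    """dev_pairs: list of (vec1, vec2, gold_label) with gold_label in {0, 1}.
    Pick the similarity threshold that maximizes accuracy on the validation set."""
    sims = [cosine(a, b) for a, b, _ in dev_pairs]
    golds = [y for _, _, y in dev_pairs]
    best_t, best_acc = 0.5, -1.0
    for t in sorted(set(sims)):
        acc = float(np.mean([(s >= t) == bool(y) for s, y in zip(sims, golds)]))
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def predict(vec1, vec2, threshold):
    """Predict 1 ("same sense") if the contextual vectors are similar enough."""
    return int(cosine(vec1, vec2) >= threshold)
```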


Introduction

Lexical ambiguity is prevalent across different languages and plays an important role in improving communication efficiency (Piantadosi et al., 2012). Word Sense Disambiguation (WSD) is a long-standing challenge in the field of Natural Language Processing (NLP), and Artificial Intelligence more generally, with an extended history of research in computational linguistics (Navigli, 2009). Both computational and psychological accounts of meaning representation have converged on high-dimensional vectors within semantic spaces. There is a rich line of work on learning word embeddings based on statistical regularities from unlabeled corpora, following the well-established Distributional Hypothesis (DH) (Harris, 1954; Firth, 1957). The development and improvement of word embeddings has been a major contributor to the progress of NLP in the last decade (Goldberg, 2017).
