Abstract

Word embeddings obtained from recently developed neural language models capture the semantic and grammatical behavior of words and can effectively reveal relationships between them. Such embeddings have been shown to be effective for various NLP tasks. In this paper, we develop a supervised method for word sense disambiguation (WSD) that employs word embeddings as local context features. Our experiments demonstrate the usefulness of word embeddings for WSD, and a comparison of methods using different vector representations reveals their effects on the WSD task.
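As a concrete illustration of the general idea, the following minimal Python sketch shows how word embeddings can serve as local context features in a supervised WSD classifier. It is not the paper's exact pipeline: the toy embedding table, the ±2-word context window, and the logistic-regression classifier are all illustrative assumptions.

```python
# Minimal sketch (not the paper's exact method): embeddings of the words
# surrounding an ambiguous target are concatenated into a feature vector,
# and a per-lemma classifier is trained on sense-tagged examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

DIM = 50
rng = np.random.default_rng(0)
# Toy embedding table; a real system would load pretrained vectors.
embeddings = {w: rng.normal(size=DIM) for w in
              ["i", "sat", "on", "the", "bank", "river", "deposited",
               "money", "in", "grassy"]}

def context_features(tokens, target_idx, window=2):
    """Concatenate embeddings of the words around the target position."""
    parts = []
    for offset in range(-window, window + 1):
        if offset == 0:
            continue  # skip the ambiguous word itself
        j = target_idx + offset
        vec = (embeddings.get(tokens[j], np.zeros(DIM))
               if 0 <= j < len(tokens) else np.zeros(DIM))
        parts.append(vec)
    return np.concatenate(parts)

# Tiny sense-tagged training set for the lemma "bank" (illustrative).
train = [
    (["i", "sat", "on", "the", "bank", "of", "the", "river"], 4, "bank/river"),
    (["the", "grassy", "bank", "of", "the", "river"], 2, "bank/river"),
    (["i", "deposited", "money", "in", "the", "bank"], 5, "bank/finance"),
    (["the", "bank", "approved", "the", "loan"], 1, "bank/finance"),
]
X = np.stack([context_features(toks, i) for toks, i, _ in train])
y = [sense for _, _, sense in train]

clf = LogisticRegression(max_iter=1000).fit(X, y)
test = ["she", "walked", "along", "the", "bank", "of", "the", "river"]
print(clf.predict([context_features(test, 4)]))
```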
