Abstract
Words have different meanings (i.e., senses) depending on context. Disambiguating the correct sense is an important and challenging task in natural language processing. An intuitive approach is to select the sense whose definition has the highest similarity to the context, using sense definitions provided by WordNet, a large lexical database of English in which nouns, verbs, adjectives, and adverbs are grouped into sets of cognitive synonyms interlinked through conceptual-semantic and lexical relations. Traditional unsupervised approaches compute similarity by counting overlapping words between the context and the sense definitions, which must match exactly. Instead, similarity should be computed based on how words are related, by representing the context and sense definitions in a vector space model and analyzing the distributional semantic relationships among them with latent semantic analysis (LSA). However, as a corpus of text becomes more massive, LSA consumes much more memory and does not scale well to training on a huge corpus. A word-embedding approach has an advantage here. Word2vec is a popular word-embedding approach that represents words in a fixed-size vector space through either the skip-gram or the continuous bag-of-words (CBOW) model, and it captures semantic and syntactic word similarities from a huge corpus of text more effectively than LSA. Our method uses Word2vec to construct a context sentence vector and sense definition vectors, then scores each word sense by the cosine similarity between those sentence vectors. The sense definitions are also expanded with sense relations retrieved from WordNet. If a score is not higher than a specific threshold, it is combined with the probability of that sense's distribution, learned from a large sense-tagged corpus, SemCor. The most likely answer senses are those with the highest scores.
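The scoring step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the embeddings, sense labels, prior probabilities, and threshold value are all made-up toy values standing in for a trained Word2vec model and SemCor-derived statistics.

```python
import numpy as np

# Toy 3-dimensional embeddings standing in for a trained Word2vec model
# (illustrative values only; a real model has hundreds of dimensions).
EMBEDDINGS = {
    "money":   np.array([0.8, 0.2, 0.1]),
    "deposit": np.array([0.7, 0.3, 0.2]),
    "river":   np.array([0.1, 0.9, 0.3]),
    "water":   np.array([0.2, 0.8, 0.4]),
    "slope":   np.array([0.1, 0.7, 0.5]),
}

def sentence_vector(words):
    """Average the embeddings of the known words in a sentence."""
    vectors = [EMBEDDINGS[w] for w in words if w in EMBEDDINGS]
    return np.mean(vectors, axis=0)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def score_senses(context, sense_definitions, sense_priors, threshold=0.9):
    """Score each sense by cosine similarity between the context vector
    and its definition vector; if the best score falls below the
    threshold, blend in the sense-distribution prior (in the paper,
    estimated from the sense-tagged SemCor corpus)."""
    ctx = sentence_vector(context)
    scores = {s: cosine(ctx, sentence_vector(d))
              for s, d in sense_definitions.items()}
    if max(scores.values()) < threshold:
        scores = {s: sc + sense_priors.get(s, 0.0)
                  for s, sc in scores.items()}
    return scores

# Hypothetical example: disambiguating "bank" in a financial context.
context = ["deposit", "money"]
definitions = {
    "bank.n.financial": ["money", "deposit"],
    "bank.n.river":     ["river", "water", "slope"],
}
priors = {"bank.n.financial": 0.8, "bank.n.river": 0.2}
scores = score_senses(context, definitions, priors)
best = max(scores, key=scores.get)
```

Averaging word vectors is one simple way to build a sentence vector; the cosine score then rewards definitions whose words are distributionally related to the context words even when no word overlaps exactly, which is precisely what the overlap-counting baselines cannot do.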
Our results (50.9%, or 48.7% without the probability of sense distribution) are higher than the baselines (i.e., the original, simplified, adapted, and LSA Lesk) and outperform many unsupervised systems that participated in the SENSEVAL-3 English lexical sample task.
Highlights
Word sense disambiguation (WSD) is an important and challenging task for natural language processing (NLP) applications such as machine translation, information retrieval, question answering, speech synthesis, and sentiment analysis
The results show that our method (50.9%, or 48.7% without the probability of sense distribution) scores higher than the baselines and outperforms many unsupervised systems that participated in the SENSEVAL-3 English lexical sample task [15]
Instead of using an overlap-based approach, we constructed sentence vectors with a popular word-embedding approach, Word2vec, which captures semantic and syntactic similarities more effectively than latent semantic analysis (LSA) when dealing with a huge corpus of text, while using less memory
Summary
Word sense disambiguation (WSD) is an important and challenging task for natural language processing (NLP) applications such as machine translation, information retrieval, question answering, speech synthesis, and sentiment analysis. An all-words task tries to disambiguate every word in the entire text, for which it is difficult to provide an all-words training corpus. This motivates an unsupervised approach, which discovers underlying semantic and syntactic relations in a large text corpus and works with a dictionary/lexicon to determine the correct sense [2,3,4,5,6]. The unsupervised approach has the advantage of not needing a huge annotated corpus, making it more practical in real-world applications. This can be done with the help of a large, well-organized public lexicon known as WordNet [8], which provides a set of cognitive synonyms (a synset) for each word, categorized by part-of-speech tag, where each synset comes with a definition (or gloss), examples of usage, and its relationships to other words. The rest of the paper (Future Internet 2019, 11, 114) is organized as follows: Section 4 describes experiments and results, Section 5 briefly illustrates an example of WSD applications, and Section 6 outlines the conclusion and future work
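The gloss-expansion step mentioned in the abstract (enriching a sense definition with the glosses of related synsets) can be sketched with a toy WordNet-like structure. The synset identifiers, glosses, and relation links below are illustrative stand-ins, not actual WordNet entries; a real system would query WordNet itself.

```python
# Toy WordNet-like lexicon: each synset carries a gloss and relation
# links. Entries here are invented for illustration only.
LEXICON = {
    "bank.n.01": {
        "gloss": ["a", "financial", "institution", "that", "accepts", "deposits"],
        "hypernyms": ["institution.n.01"],
    },
    "institution.n.01": {
        "gloss": ["an", "organization", "founded", "for", "a", "purpose"],
        "hypernyms": [],
    },
}

def expanded_gloss(synset_id):
    """Return a synset's gloss extended with the glosses of its related
    synsets (only hypernyms in this toy lexicon), giving the sense
    definition more words for the similarity comparison."""
    entry = LEXICON[synset_id]
    words = list(entry["gloss"])
    for related in entry["hypernyms"]:
        words.extend(LEXICON[related]["gloss"])
    return words

expanded = expanded_gloss("bank.n.01")
```

Because short glosses often share few related words with a given context, pulling in the glosses of linked synsets makes the definition vector a denser, more reliable representation of the sense.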