Abstract

The lexical ambiguity of words has been successfully clarified by representing words at the sense level instead of the word level, an approach known as word sense representation (WSR). However, WSR models are typically trained in an unsupervised fashion without guidance from sense inventories, so the number of sense vectors assigned to a word varies from model to model; some senses are missed while others are added unnecessarily. Moreover, to use such sense vectors in natural language processing tasks, we must determine which sense of a word to choose. In this paper, we introduce a unified neural model that incorporates WSR into word sense disambiguation (WSD), thereby leveraging the sense inventory. We use bidirectional long short-term memory (BiLSTM) networks to capture the sequential information of contexts effectively. To overcome the limited size of the labeled dataset, we train our model in a semi-supervised fashion, scaling up the training data with a large-scale unlabeled dataset. We evaluate the proposed model on both WSR and WSD tasks. The experimental results demonstrate that our model outperforms the state of the art by 0.27% on the WSR task and by 1.4% on the WSD task, in terms of Spearman's correlation and F1-score, respectively.
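The abstract does not spell out the architecture, but the core idea it describes, a BiLSTM context encoder scoring candidate senses drawn from a sense inventory, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the class name, dimensions, and the dot-product scoring scheme are all assumptions for the sake of example.

```python
import torch
import torch.nn as nn

class BiLSTMSenseDisambiguator(nn.Module):
    """Hypothetical sketch: encode a sentence with a BiLSTM and score
    inventory-provided candidate senses of a target word against the
    contextual state at the target position."""

    def __init__(self, vocab_size, num_senses, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        # One vector per inventory sense, learned jointly with the encoder,
        # so WSR (sense vectors) and WSD (sense selection) share one model.
        self.sense_embed = nn.Embedding(num_senses, 2 * hidden_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim,
                               batch_first=True, bidirectional=True)

    def forward(self, token_ids, target_pos, candidate_sense_ids):
        # token_ids: (batch, seq_len); target_pos: (batch,)
        # candidate_sense_ids: (batch, num_candidates), the senses the
        # inventory lists for each target word.
        states, _ = self.encoder(self.word_embed(token_ids))  # (B, T, 2H)
        context = states[torch.arange(token_ids.size(0)), target_pos]  # (B, 2H)
        senses = self.sense_embed(candidate_sense_ids)         # (B, C, 2H)
        # Dot-product score of each candidate sense against the context.
        return torch.einsum("bd,bcd->bc", context, senses)

# Toy usage: disambiguate the token at position 2 among three candidate senses.
model = BiLSTMSenseDisambiguator(vocab_size=1000, num_senses=50)
tokens = torch.randint(0, 1000, (1, 6))
scores = model(tokens, torch.tensor([2]), torch.tensor([[4, 17, 33]]))
predicted = scores.argmax(dim=-1)  # index into the candidate list
```

In a semi-supervised setup of the kind the abstract mentions, such a model could be trained on sense-labeled examples directly, with the unlabeled corpus contributing, for instance, through self-labeling of high-confidence predictions; the exact scheme used in the paper is not given here.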
