Abstract

Word sense disambiguation (WSD) is a subfield of natural language processing that deals with words that have several possible meanings; such polysemous terms are sometimes also called confusing phrases. The performance of WSD depends on how effectively the machine recognizes the ambiguous word. The discussed word embedding model maps ambiguous words from document space to vector space without data loss. The most significant challenge in representing ambiguous words is the features: selecting and representing an ambiguous word with respect to its features is the most tedious part of word embedding. The discussed model uses countable features of the available context for disambiguation and is implemented for ambiguous/polysemous words with context information; when no context is available, disambiguation becomes the model's main challenge. A Recurrent Neural Network with Long Short-Term Memory (RNN-LSTM) performs the classification. The output of the RNN-LSTM is a set of sense values, which are then mapped to the freely available lexical resource WordNet to retrieve the correct sense (meaning).
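The pipeline described above (countable context features, a classifier producing a sense value, and a lookup in a sense inventory such as WordNet) can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: the cue-word counting stands in for the learned RNN-LSTM classifier, and the tiny hard-coded `SENSE_INVENTORY` stands in for WordNet; all names and word lists are assumptions.

```python
# Hypothetical sketch of the abstract's pipeline:
# count-based context features -> classifier -> sense index -> sense gloss.
# The cue-word counter below is a stand-in for the RNN-LSTM classifier,
# and SENSE_INVENTORY is a toy stand-in for WordNet.
from collections import Counter

# Toy sense inventory: each ambiguous word maps to an ordered list of
# glosses; the classifier's output index selects one.
SENSE_INVENTORY = {
    "bank": ["financial institution", "sloping land beside a river"],
}

# Indicator words whose occurrence counts in the context act as the
# "countable features" the abstract mentions (illustrative choices).
FEATURE_WORDS = {
    "bank": {0: {"money", "loan", "deposit"},    # financial-sense cues
             1: {"river", "water", "fishing"}},  # river-sense cues
}

def context_features(word, context_tokens):
    """Count cue words in the context for each candidate sense of `word`."""
    counts = Counter(t.lower() for t in context_tokens)
    return {sense: sum(counts[cue] for cue in cues)
            for sense, cues in FEATURE_WORDS[word].items()}

def disambiguate(word, context_tokens):
    """Pick the sense with the highest cue count and map it to its gloss."""
    feats = context_features(word, context_tokens)
    sense_index = max(feats, key=feats.get)   # classifier stand-in
    return SENSE_INVENTORY[word][sense_index]  # "WordNet" lookup

print(disambiguate("bank", "I opened a deposit account at the bank".split()))
# -> financial institution
print(disambiguate("bank", "We sat on the bank of the river".split()))
# -> sloping land beside a river
```

In the actual model, the hand-picked cue sets would be replaced by learned features and the argmax by the RNN-LSTM's predicted sense value; only the final step, mapping a sense index to a WordNet entry, keeps the same shape.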
