Abstract

Word sense disambiguation (WSD), the task of determining the correct meaning of a word within a specific context, is a fundamental and persistent problem in natural language processing (NLP). A single word can carry two or more meanings, each distinguished by its context; this phenomenon is known as polysemy. WSD has applications in a wide range of fields, such as question answering systems, machine translation, and information retrieval (IR). Ambiguity also remains a challenge for ontology-based systems. Homonyms, which are ubiquitous in most languages, are words that share the same spelling but differ in meaning. The primary goal of this study is to employ WordNet and the Lesk algorithm for WSD. The method's fundamental premise is to select the appropriate sense by comparing a word's context in a sentence to the definitions of its different senses (glosses) drawn from WordNet: the Lesk algorithm chooses the sense whose gloss shares the highest number of words (maximum overlap) with the word's context, which helps identify the most accurate interpretation of a word in a given context. The algorithm was implemented and tested on a collection of sentences containing ambiguous words, and it selected the correct synset for most of the sentences. According to the experimental findings, the suggested strategy considerably boosts performance in identifying homonyms.
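The maximum-overlap idea behind the Lesk algorithm can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the tiny gloss inventory and the `simplified_lesk` helper are hypothetical stand-ins for a full WordNet sense inventory.

```python
# Simplified Lesk: pick the sense whose gloss shares the most words
# with the sentence containing the ambiguous word.
# GLOSSES is a toy, hand-written sense inventory (an assumption for
# illustration); a real system would query WordNet glosses instead.

GLOSSES = {
    "bank": {
        "financial": "an institution that accepts deposits and lends money",
        "river": "the sloping land beside a river or body of water",
    },
}

STOPWORDS = {"a", "an", "the", "of", "and", "to", "that", "or", "beside", "by", "on"}

def tokenize(text):
    """Lowercase, strip punctuation, and drop stopwords."""
    return {w.strip(".,").lower() for w in text.split()} - STOPWORDS

def simplified_lesk(word, sentence):
    """Return the sense of `word` whose gloss maximally overlaps the context."""
    context = tokenize(sentence)
    best_sense, best_overlap = None, -1
    for sense, gloss in GLOSSES[word].items():
        overlap = len(context & tokenize(gloss))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

print(simplified_lesk("bank", "He deposited money at the bank"))
```

Here the context word "money" overlaps with the financial gloss, so that sense wins; a sentence mentioning a river would instead select the river sense.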
