Abstract

Word embedding generation is the task of learning distributed representations of words in a vector space. Word embeddings capture both syntactic and semantic information needed to perform natural language processing tasks. Over the past few years, word embeddings have attracted considerable attention owing to their applicability to natural language processing tasks such as part-of-speech tagging, sentiment analysis, and dependency parsing. Despite recent advances in word embedding methods, very little research has been devoted to Urdu word embeddings, in which words with similar meanings are mapped to nearby points in the vector space and dissimilar words lie far apart. In this paper, the Word2vec model is used to generate Urdu word embeddings. The model produces dense vector representations of Urdu words that can serve as pre-trained word vectors. The results show that the proposed method can be used to improve conventional word embedding methods.
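
As a rough illustration of the kind of pipeline the abstract describes, the sketch below trains a Word2vec model on a tokenized Urdu corpus using the gensim library. The corpus file name, the query word, and the hyperparameter values are illustrative assumptions and are not taken from the paper.

    # Minimal sketch: training Word2vec embeddings for Urdu with gensim.
    # Assumptions (not from the paper): a whitespace-tokenized corpus file
    # "urdu_corpus.txt" with one sentence per line, and illustrative
    # hyperparameters (vector_size, window, min_count, sg).
    from gensim.models import Word2Vec

    # Read the corpus: each line is one sentence, tokens separated by spaces.
    with open("urdu_corpus.txt", encoding="utf-8") as f:
        sentences = [line.split() for line in f if line.strip()]

    # Train a skip-gram Word2vec model (sg=1); sg=0 would use CBOW instead.
    model = Word2Vec(
        sentences=sentences,
        vector_size=300,   # dimensionality of the dense word vectors
        window=5,          # context window size
        min_count=5,       # ignore rare tokens
        sg=1,              # 1 = skip-gram, 0 = CBOW
        workers=4,
    )

    # Save the trained vectors for later reuse as pre-trained embeddings.
    model.wv.save("urdu_word2vec.kv")

    # Words with similar meanings should have nearby vectors; the query
    # word here is only an example.
    print(model.wv.most_similar("پاکستان", topn=5))

In this setup, nearest neighbours in the learned vector space would be expected to correspond to semantically related Urdu words, which is the property the abstract highlights.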
