Abstract

Research in machine learning, and in natural language processing (NLP) in particular, has grown rapidly in recent years. One foundational technique for NLP tasks is word embedding, which represents words as vectors of real numbers, since neural networks operate on numerical input rather than raw text. Word embeddings aim to capture both the syntactic and semantic information of words, encoding relationships derived from context and morphology. This paper reviews the major word embedding techniques in use today, ranging from traditional frequency-based embeddings to pre-trained, prediction-based embeddings. Its goal is to survey the available methods, classify how they work, identify their strengths and weaknesses for text classification, and explain their advantages over traditional NLP approaches.
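To make the core idea concrete, the sketch below shows how word embeddings are typically used: each word maps to a vector, and geometric closeness between vectors stands in for semantic similarity. The vectors and words here are made-up illustrative values, not output from any model discussed in the paper; real embeddings such as word2vec or GloVe usually have 100-300 dimensions.

```python
import numpy as np

# Hypothetical 4-dimensional embedding table for illustration only.
embeddings = {
    "king":  np.array([0.80, 0.45, 0.10, 0.05]),
    "queen": np.array([0.78, 0.48, 0.12, 0.08]),
    "apple": np.array([0.05, 0.10, 0.90, 0.30]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors; 1.0 means same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Semantically related words should score higher than unrelated ones.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low
```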
