Abstract

Natural language processing (NLP) applies computational techniques to enable communication between humans and computers through natural human language, drawing on computer science, computational linguistics, and artificial intelligence. Among the techniques that have driven the progression of NLP, word embedding has brought significant changes to computational linguistics, statistical inference, and related areas. Semantic clustering can be interpreted as grouping objects that are semantically similar. The main focus of this work is to examine different word embedding techniques for semantic clustering of natural Bangla words. N-gram models were applied to this task earlier, but with the advancement of deep learning, dynamic word clustering models are now preferred because they speed up memory retrieval and decrease processing time. In this work we discuss the effectiveness of the Word2Vec, TF-IDF, FastText, and GloVe word embedding models and appraise their performance in terms of accuracy and competence.
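
As a rough illustration of the kind of pipeline the abstract describes (not the authors' exact setup), the sketch below trains a skip-gram Word2Vec model on a tiny tokenized Bangla corpus and clusters the resulting word vectors with k-means. The toy corpus, vector_size, and n_clusters values are illustrative assumptions only.

```python
# Minimal sketch: word embeddings + k-means for semantic clustering of Bangla words.
# Assumes gensim 4.x and scikit-learn; the corpus and hyperparameters are hypothetical.
from gensim.models import Word2Vec
from sklearn.cluster import KMeans
import numpy as np

# Assumed input: a list of tokenized Bangla sentences (each a list of word strings).
corpus = [
    ["আমি", "বই", "পড়ি"],            # "I read a book"
    ["সে", "কলম", "দিয়ে", "লেখে"],    # "He/she writes with a pen"
    ["আমি", "আম", "খাই"],            # "I eat a mango"
    ["সে", "কলা", "খায়"],            # "He/she eats a banana"
]

# Skip-gram Word2Vec; on a realistic corpus min_count would be set higher.
model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, sg=1, epochs=50)

words = list(model.wv.index_to_key)
vectors = np.array([model.wv[w] for w in words])

# Group semantically similar words; the cluster count here is chosen arbitrarily.
kmeans = KMeans(n_clusters=2, random_state=0, n_init=10).fit(vectors)
for word, label in zip(words, kmeans.labels_):
    print(word, "-> cluster", label)
```

A comparable sketch could swap Word2Vec for gensim's FastText class, or build TF-IDF vectors with scikit-learn's TfidfVectorizer, to compare the embedding models the work evaluates.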
