Abstract

Natural language processing (NLP) is an area of artificial intelligence concerned with the understanding, interpretation, and generation of human language, enabling computers to carry out tasks such as sentiment analysis, text summarization, conversational agents, machine translation, and speech recognition. From conversational agents called chatbots, deployed on websites to interact with consumers digitally and understand their needs, to summarized content delivered through smartphone apps, NLP has achieved major successes in transforming a digital world that is increasingly geared towards artificial intelligence. One area that has seen remarkable growth in recent times is language modelling, a statistical technique for computing the probability of a sequence of tokens or words in a sentence. In this paper, we attempt to present an overview of various representations with respect to language modelling, from neural word embeddings such as Word2Vec and GloVe to deep contextualized pre-trained embeddings such as ULMFiT, ELMo, OpenAI GPT, and BERT.
