Abstract

In a world where data plays a central role, pre-training draws on a wide variety of sources, such as online books, encyclopedias, equation analysis, and repositories of common-sense knowledge and reasoning. The increasing capacity of pre-trained language models has given knowledge-intensive natural language processing (KI-NLP) a new impetus toward models that are stable, flexible, robust, and efficient. Pre-trained models nevertheless have their own shortcomings in handling KI-NLP tasks, and this paper examines the challenges that arise in the field. A wide variety of pre-trained language models enhanced with external knowledge sources have been proposed and are under rapid development to meet these difficulties. We also discuss the challenges NLP faces in building knowledge-intensive models, and we define a mathematical model and analyze its framework dependability for pre-training different languages. Finally, drawing on a range of prior literature, we describe the present progress of pre-trained language model-based knowledge-enhanced models (PLMKEs) by deconstructing their three key elements: knowledge sources, knowledge-intensive NLP tasks, and knowledge fusion methods.
