Abstract
Because of the volume of text that computers can now process and the hardware available to process it, knowledge projects have increasingly focused on deep learning architectures, and the natural language processing task of named entity recognition (NER) is no exception. Deep learning models, particularly recurrent neural networks (RNNs) and their variants such as gated recurrent units (GRUs) and long short-term memory (LSTM) networks, have revolutionised NER by capturing intricate contextual dependencies. Because they effectively model the sequential and hierarchical patterns within text, these models can accurately predict the boundaries and types of named entities. However, applying deep learning to NER remains challenging in itself. Annotated training data is critical for building reliable models, but obtaining labelled data for every entity type and domain can be time-consuming. To address this difficulty, transfer learning and domain adaptation strategies have emerged, which take pre-trained models and adapt them to new domains or target tasks. This paper examines recent deep learning methods for NER and how they evolved from earlier linear learning approaches. It also analyses the state of tasks that sit upstream or downstream of NER, such as sequence tagging and entity linking.
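As an illustration of the sequence-tagging architecture the abstract describes, the sketch below implements a minimal bidirectional LSTM tagger in PyTorch. It is not the paper's model; the vocabulary size, tag set, and hyperparameters are hypothetical placeholders chosen only to make the example self-contained and runnable.

```python
# Illustrative sketch (not the paper's implementation): a minimal BiLSTM
# sequence tagger of the kind surveyed in the abstract. All dimensions and
# the tag set size are hypothetical placeholders.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, tagset_size, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # A bidirectional LSTM captures left and right context for each token,
        # the sequential patterns that help locate entity boundaries.
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        # A linear layer maps the concatenated forward/backward hidden states
        # to per-token tag scores (e.g. BIO labels encoding entity type).
        self.fc = nn.Linear(2 * hidden_dim, tagset_size)

    def forward(self, token_ids):
        embedded = self.embed(token_ids)   # (batch, seq_len, embed_dim)
        outputs, _ = self.lstm(embedded)   # (batch, seq_len, 2 * hidden_dim)
        return self.fc(outputs)            # (batch, seq_len, tagset_size)

# Toy usage: one sentence of five token ids, nine possible BIO tags.
model = BiLSTMTagger(vocab_size=10_000, tagset_size=9)
tokens = torch.randint(0, 10_000, (1, 5))
tag_scores = model(tokens)                 # shape: (1, 5, 9)
```

In practice such a tagger is trained with a per-token cross-entropy loss, and a pre-trained variant can be fine-tuned on a new domain, which is the transfer-learning strategy the abstract mentions.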