Abstract

Natural language processing (NLP) applies computational techniques that allow machines to process human language. One of its primary tasks is information extraction, which aims to capture important information from text. The fast-growing web now contains a vast amount of textual information, which demands techniques for extracting what is relevant. Named entity recognition is a type of information extraction that attempts to find and classify named entities appearing in unstructured text documents. Traditional coarse-grained entity recognition systems define only a small number of pre-defined entity categories, such as person, location, organization, and date. Fine-grained entity type classification models instead classify target entities into fine-grained types. Most recent work relies on bidirectional LSTMs with an attention mechanism, but because of the bidirectional LSTM's complex structure, these models consume an enormous amount of time during training. Existing attention mechanisms are also unable to capture the correlation between a new word and its previous context. The proposed system resolves these issues by combining a bidirectional GRU with a self-attention mechanism. Experimental results show that the approach outperforms state-of-the-art methods.
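As a rough illustration of the self-attention step the abstract describes, the sketch below applies scaled dot-product self-attention to a matrix of encoder hidden states, so each position's representation is reweighted by its correlation with every other position. The Bi-GRU encoder itself is omitted, and all shapes and names are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(H):
    """Scaled dot-product self-attention over encoder states H of shape (T, d).

    Each output row is a weighted mix of all T states, where the weights
    come from pairwise similarity between states -- this is what lets the
    model relate a word to its surrounding context.
    """
    d = H.shape[-1]
    scores = H @ H.T / np.sqrt(d)       # (T, T) pairwise correlations
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ H                  # context-aware representations

# Toy example: 5 time steps of a hypothetical Bi-GRU output, dimension 8.
rng = np.random.default_rng(0)
H = rng.standard_normal((5, 8))
C = self_attention(H)
print(C.shape)  # (5, 8)
```

In a full model, `H` would be the concatenated forward and backward GRU states, and the attended representations `C` would feed a classifier over the fine-grained type set.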
