Abstract

Named entity recognition (NER) has been one of the most widely studied natural language processing tasks in recent years. Conventional solutions treat NER as a sequence labeling problem, but such approaches cannot handle nested NER: when one entity contains another, a single tag per token is no longer sufficient. The pyramid model stacks L flat NER layers for prediction, implicitly enumerating all spans of length at most L. However, the original model uses a block consisting of a convolutional layer and a bidirectional long short-term memory (Bi-LSTM) layer as the decoder; this design does not consider the dependency between adjacent inputs, and the Bi-LSTM cannot process sequential inputs in parallel. To improve performance and reduce forward computation, we propose a Multi-Head Adjacent Attention-based Pyramid Layered model. In addition, when the pyramid structure builds a span representation, the intermediate words contribute a larger share of information than the words at the two ends. To address this imbalance, we fuse the output of the attention layer with the features of the head and tail words during classification. We conducted experiments on the nested NER datasets GENIA, SciERC, and ADE to validate the effectiveness of the proposed model.
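
The abstract alone does not include the authors' implementation. As a rough illustration only, the sketch below shows how a pyramid of L stacked layers can enumerate all spans of length at most L, and how head- and tail-word features could be fused into each span representation before classification. All names (PyramidSketch, merge, classify) are hypothetical, PyTorch is assumed, and standard multi-head self-attention stands in for the paper's adjacent attention mechanism.

import torch
import torch.nn as nn

class PyramidSketch(nn.Module):
    """Hypothetical sketch: layer l (0-indexed) holds one vector per
    span of length l + 1, so L layers cover all spans of length <= L."""

    def __init__(self, d_model, num_heads, num_layers, num_labels):
        super().__init__()
        # One attention block per pyramid layer. The paper restricts
        # attention to adjacent positions; plain multi-head self-attention
        # is used here only to keep the sketch short.
        self.attn_layers = nn.ModuleList(
            [nn.MultiheadAttention(d_model, num_heads, batch_first=True)
             for _ in range(num_layers)]
        )
        # A width-2 convolution merges adjacent span vectors, so each
        # pyramid layer covers spans one token longer than the last.
        self.merge = nn.Conv1d(d_model, d_model, kernel_size=2)
        # The classifier sees the span vector concatenated with the
        # embeddings of the span's head (first) and tail (last) words.
        self.classify = nn.Linear(3 * d_model, num_labels)

    def forward(self, tokens):
        # tokens: (batch, seq_len, d_model) contextual word embeddings.
        logits_per_length = []
        x = tokens
        for l, attn in enumerate(self.attn_layers):
            x, _ = attn(x, x, x)                  # decode layer l
            span_len = l + 1
            heads = tokens[:, : x.size(1), :]     # first word of each span
            tails = tokens[:, span_len - 1 :, :]  # last word of each span
            fused = torch.cat([x, heads, tails], dim=-1)
            logits_per_length.append(self.classify(fused))
            if x.size(1) == 1:                    # no longer spans remain
                break
            x = self.merge(x.transpose(1, 2)).transpose(1, 2)
        return logits_per_length                  # spans of length 1..L

Because each layer attends over its own inputs rather than recurring over them, the per-layer computation can run in parallel across positions, which is the motivation the abstract gives for replacing the Bi-LSTM decoder.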
