Abstract

The task of named entity recognition (NER) is normally divided into nested NER and flat NER depending on whether named entities are nested or not. Models are usually developed separately for the two tasks, since sequence labeling models, the most widely used backbone for flat NER, can assign only a single label to a particular token, which is unsuitable for nested NER, where a token may be assigned several labels. In this paper, we propose a unified framework that is capable of handling both flat and nested NER tasks. Instead of treating NER as a sequence labeling problem, we propose to formulate it as a machine reading comprehension (MRC) task. For example, extracting entities with the PER label is formalized as extracting answer spans to the question "which person is mentioned in the text?". This formulation naturally tackles the entity overlapping issue in nested NER: extracting two overlapping entities of different categories requires answering two independent questions. Additionally, since the query encodes informative prior knowledge, this strategy facilitates the process of entity extraction, leading to better performance not only on nested NER but also on flat NER. We conduct experiments on both nested and flat NER datasets. Experimental results demonstrate the effectiveness of the proposed formulation. We achieve substantial performance improvements over current SOTA models on nested NER datasets, i.e., +1.28, +2.55, +5.44, and +6.37 respectively on ACE04, ACE05, GENIA, and KBP17, along with SOTA results on flat NER datasets, i.e., +0.24, +1.95, +0.21, and +1.49 respectively on English CoNLL 2003, English OntoNotes 5.0, Chinese MSRA, and Chinese OntoNotes 4.0.

Highlights

  • In this paper, we propose a unified framework that is capable of handling both flat and nested Named Entity Recognition (NER) tasks

  • Instead of treating the task of NER as a sequence labeling problem, we propose to formulate it as a machine reading comprehension (MRC) task

  • We achieve substantial performance improvements over current SOTA models on nested NER datasets, i.e., +1.28, +2.55, +5.44, and +6.37 respectively on ACE04, ACE05, GENIA, and KBP17, along with SOTA results on flat NER datasets, i.e., +0.24, +1.95, +0.21, and +1.49 respectively on English CoNLL 2003, English OntoNotes 5.0, Chinese MSRA, and Chinese OntoNotes 4.0


Summary

Introduction

We propose a unified framework that is capable of handling both flat and nested NER tasks. The task of flat NER is commonly formalized as a sequence labeling task: a sequence labeling model (Chiu and Nichols, 2016; Ma and Hovy, 2016; Devlin et al., 2018) is trained to assign a single tagging class to each unit within a sequence of tokens. This formulation is incapable of handling overlapping entities in nested NER (Huang et al., 2015; Chiu and Nichols, 2015), where multiple categories need to be assigned to a single token if the token participates in multiple entities. Instead, we formulate NER as a machine reading comprehension (MRC) task: for example, assigning the PER (PERSON) label in "[Washington] was born into slavery on the farm of James Burroughs" is formalized as answering the question "which person is mentioned in the text?" This strategy naturally tackles the entity overlapping issue in nested NER: extracting two overlapping entities of different categories requires answering two independent questions. We hope that our work will inspire new paradigms for the entity recognition task.
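The core reformulation described above can be sketched as a simple data transformation: each entity label becomes a natural-language query, and annotated spans for that label become the answer spans for the corresponding (query, context) pair. The sketch below is illustrative only; the query templates, function names, and toy annotation format are assumptions for exposition, not the paper's released implementation.

```python
# Hypothetical query template per entity label (the paper constructs
# queries from annotation guidelines; these are simplified stand-ins).
QUERIES = {
    "PER": "which person is mentioned in the text?",
    "ORG": "which organization is mentioned in the text?",
    "LOC": "which location is mentioned in the text?",
}

def to_mrc_examples(context, entities):
    """Turn one sentence with (possibly nested) entity annotations into
    independent (query, context, answer-spans) triples, one per label.

    `entities` is a list of (start, end, label) character spans. Spans
    for different labels may overlap, which a single sequence-labeling
    tag per token cannot express, but the MRC view handles naturally:
    each label is asked about in its own independent question.
    """
    examples = []
    for label, query in QUERIES.items():
        answers = [(s, e) for (s, e, lab) in entities if lab == label]
        examples.append({"query": query, "context": context, "answers": answers})
    return examples

context = "Washington was born into slavery on the farm of James Burroughs"
entities = [(0, 10, "PER"), (48, 63, "PER")]

for ex in to_mrc_examples(context, entities):
    print(ex["query"], "->", [context[s:e] for (s, e) in ex["answers"]])
```

A span-prediction model (e.g. a BERT-style MRC backbone) would then be trained on these (query, context) pairs to point at answer start and end positions; labels with no entities in the sentence simply yield no answer spans.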

Related Work
Nested Named Entity Recognition
Task Formalization
Query Generation
Model Backbone
Span Selection
Train and Test
Experiments on Nested NER
Baselines
Results
Datasets
Results and Discussions
Improvement from MRC or from BERT
How to Construct Queries
Zero-shot Evaluation on Unseen Labels
Conclusion