Abstract

Attention-based encoder-decoder neural network models have recently shown promising results in machine translation and speech recognition. In this work, we propose an attention-based neural network model for joint named entity recognition and relation extraction. We explore different strategies for incorporating alignment information into the encoder-decoder framework, and propose introducing an attention mechanism into alignment-based recurrent neural network (RNN) models. The attention mechanism provides additional information to both named entity recognition and relation extraction. Our independent models achieve state-of-the-art named entity recognition performance on the benchmark CoNLL04 dataset. Our joint training model further obtains a 0.5% absolute F1 gain on named entity recognition and a 0.9% absolute F1 improvement on relation extraction over the best models.
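
To make the described architecture concrete, the sketch below shows one way an attention mechanism over encoder states can feed both a per-token entity tagger and a sentence-level relation classifier. It is a minimal illustration only: the layer sizes, the single-head additive attention, the pooling strategy, and the class names `JointAttnTagger`, `entity_head`, and `relation_head` are assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn

class JointAttnTagger(nn.Module):
    """Illustrative attention-based encoder for joint NER and relation
    extraction. Dimensions and heads are assumptions for demonstration."""
    def __init__(self, vocab_size, emb_dim, hidden_dim, n_entity_tags, n_relations):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # BiLSTM encoder over the input sentence
        self.encoder = nn.LSTM(emb_dim, hidden_dim, bidirectional=True, batch_first=True)
        # scalar attention score per encoder state
        self.attn = nn.Linear(2 * hidden_dim, 1)
        # entity tag per token (aligned hidden state + attention context)
        self.entity_head = nn.Linear(4 * hidden_dim, n_entity_tags)
        # relation label per sentence from the pooled attention context
        self.relation_head = nn.Linear(2 * hidden_dim, n_relations)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer ids
        h, _ = self.encoder(self.embed(tokens))           # (B, T, 2H)
        weights = torch.softmax(self.attn(h).squeeze(-1), dim=-1)  # (B, T)
        context = torch.bmm(weights.unsqueeze(1), h)      # (B, 1, 2H)
        context_rep = context.expand(-1, h.size(1), -1)   # broadcast to tokens
        entity_logits = self.entity_head(torch.cat([h, context_rep], dim=-1))
        relation_logits = self.relation_head(context.squeeze(1))
        return entity_logits, relation_logits

# toy usage: batch of 2 sentences, 5 tokens each
model = JointAttnTagger(vocab_size=100, emb_dim=32, hidden_dim=64,
                        n_entity_tags=9, n_relations=6)
ent, rel = model(torch.randint(0, 100, (2, 5)))
print(ent.shape, rel.shape)  # torch.Size([2, 5, 9]) torch.Size([2, 6])
```

Joint training, as referenced in the abstract, would sum a token-level cross-entropy loss on `entity_logits` with a sentence-level cross-entropy loss on `relation_logits`, so the shared encoder and attention weights are updated by both tasks.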
