Abstract
Jointly extracting entities and their relations from text is an important task in information extraction. Despite their great success, traditional models suffer from two problems. First, the same token embeddings are shared by the two subtasks, ignoring the difference in semantic granularity: named entities depend more on local features, whereas relations are semantic expressions relevant to a whole sentence. Second, the interaction between the two subtasks, which is important for encoding the semantic dependencies between two named entities, is rarely considered. To address these problems, we present a novel joint entity and relation extraction model. It constructs two independent token embedding modules that encode features for entities and relations respectively, enabling semantic representations with different granularities for named entities and entity relations. A cross-attention mechanism is then used to capture the interaction between the two subtasks and learn the semantic dependencies within a relation instance. Experimental results demonstrate that our model outperforms previous state-of-the-art models on several public datasets, and extensive additional experiments further confirm its effectiveness.
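To make the described architecture concrete, the following is a minimal PyTorch sketch of the two ideas in the abstract: two independent token embedding modules, and a cross-attention step in which the relation-side representations attend to the entity-side representations. All names (`DualEncoderCrossAttention`, `entity_embed`, `relation_embed`) and dimensions are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DualEncoderCrossAttention(nn.Module):
    """Hypothetical sketch: separate embeddings per subtask + cross-attention."""
    def __init__(self, vocab_size=30000, dim=256, heads=8):
        super().__init__()
        # Two independent token embedding modules: one for entity features
        # (more local), one for relation features (sentence-level semantics).
        self.entity_embed = nn.Embedding(vocab_size, dim)
        self.relation_embed = nn.Embedding(vocab_size, dim)
        # Cross-attention to model the interaction between the two subtasks.
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, token_ids):
        ent = self.entity_embed(token_ids)    # (batch, seq, dim)
        rel = self.relation_embed(token_ids)  # (batch, seq, dim)
        # Relation representations query entity representations, capturing
        # semantic dependencies between the entities of a relation instance.
        fused, _ = self.cross_attn(query=rel, key=ent, value=ent)
        return ent, fused

# Toy usage: 2 sentences of 16 tokens each.
model = DualEncoderCrossAttention()
ids = torch.randint(0, 30000, (2, 16))
entity_repr, relation_repr = model(ids)
print(entity_repr.shape, relation_repr.shape)  # both torch.Size([2, 16, 256])
```

In practice the embedding modules would likely be full pretrained encoders rather than lookup tables; the sketch only shows how keeping the two streams separate allows different semantic granularities before the cross-attention fusion.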