Abstract

Joint extraction of entities and their relations depends not only on entity semantics but also correlates strongly with contextual information and entity types. An effective joint modelling method that handles information from these different modalities can therefore yield superior performance in joint entity and relation extraction. Previous span-based models tend to focus on the internal semantics of a span but fail to effectively capture the interactions between the span and other modal information (such as tokens or labels). In this study, a Span-based Multi-Modal Attention Network (SMAN) is proposed for joint entity and relation extraction. The network introduces a cloze mechanism to simultaneously extract contextual and span position information, and jointly models spans and labels in the relation extraction stage. To capture fine-grained associations between different modalities, a Modal-Enhanced Attention (MEA) module with two modes is designed and adopted in the modelling process. Experimental results show that the proposed model consistently outperforms the state of the art for both entity recognition and relation extraction on the SciERC and ADE datasets, and beats competing approaches by more than 1.42% F1 for relation extraction on the CoNLL04 dataset. Extensive additional experiments further verify the effectiveness of the proposed model.
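The span-to-modality interaction described above can be illustrated with generic scaled dot-product attention, where span representations act as queries over another modality's embeddings (e.g. context tokens or label embeddings). The sketch below is purely illustrative and assumes NumPy arrays; the function names, dimensions, and single-head formulation are not the paper's actual MEA implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(span_repr, other_modal):
    """Scaled dot-product attention: each span representation queries
    another modality (token or label embeddings) and is returned as a
    modality-enhanced span vector."""
    d_k = other_modal.shape[-1]
    scores = span_repr @ other_modal.T / np.sqrt(d_k)  # (n_spans, n_items)
    weights = softmax(scores, axis=-1)                  # attention weights
    return weights @ other_modal                        # (n_spans, d)

rng = np.random.default_rng(0)
spans = rng.standard_normal((2, 8))    # 2 candidate spans, hidden size 8
tokens = rng.standard_normal((5, 8))   # 5 context token embeddings
enhanced = cross_modal_attention(spans, tokens)
print(enhanced.shape)  # (2, 8)
```

In a two-mode design such as MEA, the same attention pattern could be applied once with token embeddings and once with label embeddings as the keyed modality, with the resulting vectors combined downstream.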
