Abstract

Compared with sentence-level relation extraction, document-level relation extraction (DocRE) is a more challenging task because it must resolve multi-entity problems: it aims to extract relations between entity pairs whose mentions span multiple sentences while exploiting significant cross-sentence features. Learning long-distance semantic relation representations across the sentences of a document, however, remains a widespread and difficult problem. To address these issues, we propose a novel Self-supervised Commonsense-enhanced DocRE approach, named SCDRE, which bypasses the need for external knowledge. SCDRE first harnesses self-supervised learning to capture the commonsense knowledge of each entity in an entity pair from the commonsense-entailed text. This acquired knowledge then serves as the foundation for transforming cross-sentence entity pairs into alias counterparts through coreference commonsense replacement. Semantic relation representation learning is then applied to these alias entity pairs, and an entity-pair rich attention fusion automatically translates the alias pairs back into the target entity pairs. By combining self-supervised learning with contextual commonsense, SCDRE is a unique, self-contained approach with an enhanced ability to extract relations from documents. We evaluate our model on three publicly available datasets, DocRED, DialogRE and MPDD; it outperforms strong baselines by 2.03% F1, and ablation analysis shows that commonsense knowledge makes an important contribution to DocRE.
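The coreference replacement idea described above can be illustrated with a minimal sketch: mentions (pronouns, aliases) that corefer with a target entity are rewritten to that entity's canonical name, so a cross-sentence entity pair becomes locally visible. The `replace_coreferences` helper, the cluster format, and the example document are illustrative assumptions, not the paper's actual implementation.

```python
def replace_coreferences(tokens, clusters):
    """Rewrite every coreferent mention with its entity's canonical name.

    tokens   -- list of word tokens for the document
    clusters -- dict mapping a canonical entity name to the set of token
                indices that corefer with it (pronouns, aliases, etc.)
    """
    out = list(tokens)
    for canonical, indices in clusters.items():
        for i in indices:
            out[i] = canonical
    return out

doc = "Marie Curie won the prize . She shared it with Pierre .".split()
# Token 6 ("She") corefers with "Marie_Curie"; token 8 ("it") with "the_prize".
clusters = {"Marie_Curie": {6}, "the_prize": {8}}
print(" ".join(replace_coreferences(doc, clusters)))
# -> Marie Curie won the prize . Marie_Curie shared the_prize with Pierre .
```

After replacement, the pair ("Marie_Curie", "the_prize") co-occurs inside a single sentence, which is the kind of locality a downstream relation encoder can exploit.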

