Abstract
Document-level relation extraction aims to infer complex semantic relations among entities in an entire document. Compared with sentence-level relation extraction, document-level relational facts are expressed by multiple mentions scattered across sentences over long distances, which demands strong reasoning ability. In this paper, we propose Dual-Channel and Hierarchical Graph Convolutional Networks (DHGCN), which constructs three graphs at the token, mention, and entity levels to model complex interactions among different semantic representations across the document. Based on the multi-level graphs, we apply a Graph Convolutional Network (GCN) at each level to aggregate the relevant information scattered throughout the document and better infer implicit relations. Moreover, we propose a dual-channel encoder that captures structural and contextual information simultaneously and supplies contextual representations to higher layers to avoid losing low-dimensional information. Our DHGCN yields significant improvements over state-of-the-art methods, by 2.75, 5.5, and 3.5 F1 on DocRED, CDR, and GDA, respectively, three popular document-level relation extraction datasets. Furthermore, to demonstrate the effectiveness of our method, we evaluate DHGCN on a fine-grained clinical document-level dataset, Symptom-Acupoint Relation (SAR), which we construct ourselves and make available at https://github.com/QiSun123/SAR. The experimental results illustrate that DHGCN is able to infer more valuable relations among entities in a document.
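To make the hierarchical aggregation concrete, the following is a minimal sketch of a multi-level GCN pass over token-, mention-, and entity-level graphs. The layer sizes, the fully-connected adjacency matrices, the toy span/mention groupings, and the mean-pooling used to lift representations from one level to the next are all illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of hierarchical GCN aggregation (assumed details, not the
# paper's exact architecture).
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Add self-loops and row-normalize the adjacency before propagating.
        a_hat = adj + torch.eye(adj.size(0))
        a_hat = a_hat / a_hat.sum(dim=1, keepdim=True)
        return torch.relu(self.linear(a_hat @ h))

# Hypothetical example: 8 tokens grouped into 3 mentions of 2 entities.
dim = 16
tokens = torch.randn(8, dim)
token_adj = torch.ones(8, 8)              # assumed fully-connected token graph
mention_spans = [(0, 2), (3, 5), (6, 8)]  # token index ranges per mention
entity_mentions = [[0, 1], [2]]           # mention indices per entity

token_gcn, mention_gcn, entity_gcn = (GCNLayer(dim, dim) for _ in range(3))

# Token-level aggregation, then pool token spans into mention nodes.
h_tok = token_gcn(tokens, token_adj)
h_men = torch.stack([h_tok[s:e].mean(dim=0) for s, e in mention_spans])
h_men = mention_gcn(h_men, torch.ones(3, 3))

# Pool coreferent mentions into entity nodes and aggregate once more.
h_ent = torch.stack([h_men[ids].mean(dim=0) for ids in entity_mentions])
h_ent = entity_gcn(h_ent, torch.ones(2, 2))
print(h_ent.shape)  # torch.Size([2, 16])
```

In this sketch, information scattered across distant tokens reaches the entity-level nodes through successive pooling and graph convolutions, which is the intuition behind inferring long-distance, cross-sentence relations.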