Abstract
As a building block of information retrieval, relation extraction aims to predict the relation type between two given entities in a piece of text. This task becomes challenging for long text that contains many task-unrelated tokens. Recent attempts to solve this problem have resorted to learning the relatedness among tokens. However, how to obtain an appropriate graph for better relatedness representation remains an open problem, and existing methods leave room for improvement. In this paper, we propose a novel latent graph learning method to enhance the expressivity of contextual information for the entities of interest. In particular, we design a dual-channel attention mechanism for multi-view graph learning and pool the learned views to sift out unrelated tokens and form a latent graph. This process can be repeated to iteratively refine the latent structure. We show that our method achieves superior performance on several benchmark datasets, compared to strong baseline models and a prior multi-view graph learning approach.
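To make the described mechanism more concrete, below is a minimal, self-contained sketch of how a dual-channel attention module could produce two graph views over token representations, pool them, and prune weak edges into a sparse latent graph that is then refined over a few rounds. The class name, the averaging-based pooling, and the top-k sifting rule are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class DualChannelLatentGraph(nn.Module):
    """Hypothetical sketch: two attention channels score pairwise token
    relatedness (two views); the views are pooled and sparsified to keep
    only strongly related tokens, yielding a latent adjacency matrix."""

    def __init__(self, hidden_dim: int, keep_ratio: float = 0.5):
        super().__init__()
        # One scoring projection per channel (view).
        self.channel_a = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.channel_b = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.keep_ratio = keep_ratio

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (seq_len, hidden_dim) contextual token representations.
        # Each channel yields one attention-based relatedness view.
        view_a = torch.softmax(self.channel_a(tokens) @ tokens.T, dim=-1)
        view_b = torch.softmax(self.channel_b(tokens) @ tokens.T, dim=-1)

        # Pool the two views into a single relatedness matrix.
        pooled = 0.5 * (view_a + view_b)

        # Sift weakly related edges: keep only the top-k entries per row,
        # producing a sparse latent graph.
        k = max(1, int(self.keep_ratio * tokens.size(0)))
        topk = torch.topk(pooled, k, dim=-1)
        latent = torch.zeros_like(pooled).scatter_(-1, topk.indices, topk.values)
        return latent


if __name__ == "__main__":
    torch.manual_seed(0)
    tokens = torch.randn(6, 32)       # 6 tokens, hidden size 32
    model = DualChannelLatentGraph(32)
    graph = model(tokens)
    # The repeated refinement described in the abstract can be approximated
    # by propagating token states over the latent graph and re-learning it.
    for _ in range(2):
        tokens = graph @ tokens
        graph = model(tokens)
    print(graph.shape)                # torch.Size([6, 6])
```

In this sketch the refinement loop simply alternates graph propagation with graph re-learning; the actual method may refine the structure differently.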