Abstract

Transfer learning has shown promising results for transferring knowledge from source tasks to target tasks in natural language processing (NLP). In this paper, we investigate a multi-task and multi-view learning (MTMVL) framework for end-to-end neural relation extraction, using large-scale noisy data as one auxiliary task to complement manually labeled data, and a dependency parsing objective as a second auxiliary task to leverage syntax. Relation extraction is approached from two views: a joint tagging view and a novel context relation view. To capture multi-level sentence information, we explore a weighted-average approach to represent shared deep bi-directional recurrent neural networks, and we encourage the auxiliary tasks to accommodate the relation extraction task via a time-dependent scheduled sampling strategy. We evaluate the MTMVL framework on the manually labeled ACE2005 dataset, and experimental results show that our model outperforms state-of-the-art methods, indicating the effectiveness of multiple sources of auxiliary information for knowledge transfer.
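The abstract does not spell out the weighted-average layer representation; a minimal sketch, assuming an ELMo-style softmax-normalized scalar mix over the $L$ hidden layers of the shared deep bi-directional RNN, would be

$$r_t = \gamma \sum_{l=1}^{L} s_l \, h_t^{(l)}, \qquad s_l = \frac{\exp(w_l)}{\sum_{l'=1}^{L} \exp(w_{l'})},$$

where $h_t^{(l)}$ is the layer-$l$ hidden state at position $t$, the $w_l$ are learned scalar weights, and $\gamma$ is a task-specific scale. All symbols here are illustrative assumptions for one plausible reading of the approach, not notation taken from the paper itself.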
