Abstract

The shortage of labeled data has been a long-standing challenge for relation extraction (RE) tasks. Semi-supervised RE (SSRE) is a promising approach that annotates unlabeled samples with pseudo-labels to obtain additional training data. However, some pseudo-labels on unlabeled data may be erroneous and can introduce misleading knowledge into SSRE models. For this reason, we propose a novel adversarial multi-teacher distillation (AMTD) framework, which combines multi-teacher knowledge distillation with adversarial training (AT), to capture the knowledge on unlabeled data in a refined way. Specifically, we first develop a general knowledge distillation (KD) technique that learns not only from pseudo-labels but also from the class distributions predicted by the different models in existing SSRE methods. To improve the robustness of the model, we further strengthen the distillation process with a language-model-based AT technique. Extensive experimental results on two public datasets demonstrate that our framework significantly improves the performance of the base SSRE methods.
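To make the two ingredients concrete, the following is a minimal PyTorch sketch, not the authors' implementation. The function names and hyperparameters are illustrative, and the FGM-style embedding perturbation is assumed as one common realization of language-model-based adversarial training; the abstract does not specify the exact AT method.

```python
import torch
import torch.nn.functional as F


def multi_teacher_distillation_loss(student_logits, teacher_logits_list, temperature=2.0):
    """Sketch of multi-teacher KD: match the student's softened class
    distribution to the average of several teachers' soft predictions.
    (Hypothetical; averaging and temperature=2.0 are assumptions.)"""
    # Average the teachers' temperature-softened class distributions.
    teacher_probs = torch.stack(
        [F.softmax(t / temperature, dim=-1) for t in teacher_logits_list]
    ).mean(dim=0)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # The usual T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature ** 2


def fgm_perturbation(embeddings, loss, epsilon=1.0):
    """Sketch of FGM-style adversarial training on word embeddings:
    one gradient step of size epsilon along the normalized loss gradient.
    (Hypothetical choice of AT method; epsilon is an assumption.)"""
    grad = torch.autograd.grad(loss, embeddings, retain_graph=True)[0]
    norm = grad.norm()
    if norm > 0:
        return embeddings + epsilon * grad / norm
    return embeddings
```

In an SSRE pipeline of this kind, the distillation loss would typically be added to the supervised loss on pseudo-labeled data, and the perturbed embeddings would be passed through a second forward pass whose loss is back-propagated alongside the clean one.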
