Abstract

Relation extraction (RE) tends to struggle when supervised training data is scarce and difficult to collect. In this article, we elicit relational and factual knowledge from large pretrained language models (PLMs) for few-shot RE (FSRE) with prompting techniques. Concretely, we automatically generate a diverse set of natural language templates and modulate the PLM's behavior through these prompts for FSRE. To mitigate the template bias that destabilizes few-shot learning, we propose a simple yet effective template regularization network (TRN) that prevents deep networks from overfitting to uncertain templates and thus stabilizes FSRE models. TRN alleviates template bias with three mechanisms: 1) an attention mechanism over the mini-batch to weight each template; 2) a ranking regularization mechanism that regularizes the attention weights and constrains the importance of uncertain templates; and 3) a template calibration module with two calibrating techniques that modify the uncertain templates in the lowest-ranked group. Experimental results on two benchmark datasets (i.e., FewRel and NYT) show that our model robustly outperforms strong competitors. For reproducibility, we will release our code and data upon the publication of this article.
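Since the abstract only names the three mechanisms, the following is a minimal PyTorch sketch of how the first two (mini-batch template attention and ranking regularization) might fit together. `TemplateAttention`, `ranking_regularizer`, the learned query vector, and the margin hyperparameter are all illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemplateAttention(nn.Module):
    """Illustrative sketch (not the paper's code): score each template's
    PLM encoding against a learned query and softmax-normalize, so
    uncertain templates receive lower weights within the mini-batch."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.query = nn.Parameter(torch.randn(hidden_dim))

    def forward(self, template_reps: torch.Tensor):
        # template_reps: (num_templates, hidden_dim) encodings of the
        # prompt templates in one mini-batch.
        scores = template_reps @ self.query        # (num_templates,)
        weights = F.softmax(scores, dim=0)         # per-template attention
        # Attention-weighted combination of the template representations.
        pooled = (weights.unsqueeze(-1) * template_reps).sum(dim=0)
        return pooled, weights

def ranking_regularizer(weights: torch.Tensor, margin: float = 0.1):
    """Assumed margin-based penalty: encourage the sorted attention
    weights to stay separated by at least `margin`, limiting the mass
    placed on the lowest-ranked (most uncertain) templates."""
    sorted_w, _ = torch.sort(weights, descending=True)
    gaps = sorted_w[:-1] - sorted_w[1:]            # non-negative after sorting
    return F.relu(margin - gaps).mean()

# Toy usage: 4 templates with 16-dimensional encodings.
attn = TemplateAttention(hidden_dim=16)
reps = torch.randn(4, 16)
pooled, weights = attn(reps)
loss = ranking_regularizer(weights)  # added to the task loss during training
```

The abstract's third mechanism, template calibration, is omitted here because its two calibrating techniques are not specified; under this sketch it would act on the templates whose weights fall in the lowest-ranked group.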
