Abstract

Relation extraction (RE) tends to struggle when supervised training data are scarce and difficult to collect. In this article, we elicit relational and factual knowledge from large pretrained language models (PLMs) for few-shot RE (FSRE) with prompting techniques. Concretely, we automatically generate a diverse set of natural language templates and modulate the PLM's behavior through these prompts for FSRE. To mitigate the template bias that destabilizes few-shot learning, we propose a simple yet effective template regularization network (TRN) that prevents deep networks from over-fitting uncertain templates and thus stabilizes FSRE models. TRN alleviates the template bias with three mechanisms: 1) an attention mechanism over the mini-batch to weight each template; 2) a ranking regularization mechanism to regularize the attention weights and constrain the importance of uncertain templates; and 3) a template calibration module with two calibrating techniques to modify the uncertain templates in the lowest-ranked group. Experimental results on two benchmark datasets (i.e., FewRel and NYT) show that our model has a robust advantage over strong competitors. For reproducibility, we will release our code and data upon the publication of this article.
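To make the first two mechanisms concrete, here is a minimal PyTorch-style sketch of attention-based template weighting and a ranking regularizer, written under our own assumptions rather than from the paper's released code; the function names, the hinge-style penalty, and the `num_uncertain` and `margin` parameters are all illustrative choices.

```python
import torch
import torch.nn.functional as F

def template_attention_weights(template_reps, query_rep):
    """Score each template against a query representation and
    normalize over the mini-batch of templates (softmax attention)."""
    # template_reps: (num_templates, hidden); query_rep: (hidden,)
    scores = template_reps @ query_rep      # one score per template
    return F.softmax(scores, dim=0)         # attention weight per template

def ranking_regularizer(weights, num_uncertain, margin=0.1):
    """Hinge-style penalty that pushes the attention weights of the
    lowest-ranked ('uncertain') templates below the trusted ones.
    This is one plausible instantiation, not the paper's exact loss."""
    sorted_w, _ = torch.sort(weights, descending=True)
    trusted = sorted_w[:-num_uncertain]     # higher-ranked templates
    uncertain = sorted_w[-num_uncertain:]   # lowest-ranked group
    # Penalize any uncertain weight that comes within `margin`
    # of the smallest trusted weight.
    gap = uncertain - (trusted.min() - margin)
    return F.relu(gap).sum()

# Illustrative usage: 8 template encodings of dimension 128.
w = template_attention_weights(torch.randn(8, 128), torch.randn(128))
loss_reg = ranking_regularizer(w, num_uncertain=2)
```

The third mechanism, template calibration, would then rewrite or re-encode the templates that fall into the lowest-ranked group identified by such a regularizer.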
