Abstract

Few-shot relation extraction (FSRE) aims to predict the relation between two entities in a sentence from only a few annotated samples. Many works address FSRE by training complex models with a huge number of parameters, which leads to long training and inference times. Some recent works instead focus on injecting relation information into prototypical networks in various ways. However, most of these methods obtain entity and relation representations by fully fine-tuning large pre-trained language models, so a complete copy of the pre-trained model must be stored for each downstream task, consuming substantial compute and storage resources. To address this problem, in this paper we introduce a lightweight approach that uses prompt learning to assist fine-tuning while updating far fewer parameters. To obtain better relation prototypes, we design a new enhanced fusion module that fuses relation information with the original prototypes. Extensive experiments on the standard FSRE benchmarks FewRel 1.0 and FewRel 2.0 verify the advantages of our method: our model achieves state-of-the-art performance.
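The abstract does not specify the exact form of the enhanced fusion module. As a rough illustration only, the PyTorch sketch below shows one plausible way to fuse support-set prototypes with relation-description embeddings via a learned gate; the class name, the gating design, and all dimensions are assumptions for illustration, not the paper's actual module.

```python
import torch
import torch.nn as nn

class GatedPrototypeFusion(nn.Module):
    """Hypothetical gated fusion of class prototypes with relation
    information (illustrative sketch, not the paper's exact module)."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        # Gate computed from the concatenated prototype and relation embedding.
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, prototypes: torch.Tensor, relation_embs: torch.Tensor) -> torch.Tensor:
        # prototypes, relation_embs: [num_classes, hidden_dim]
        g = torch.sigmoid(self.gate(torch.cat([prototypes, relation_embs], dim=-1)))
        # Convex combination: the gate decides, per dimension, how much
        # of the original prototype vs. the relation information to keep.
        return g * prototypes + (1.0 - g) * relation_embs

# Toy usage: a 5-way episode with 768-dimensional encoder outputs.
fusion = GatedPrototypeFusion(hidden_dim=768)
prototypes = torch.randn(5, 768)     # e.g., mean of support-set embeddings per class
relation_embs = torch.randn(5, 768)  # e.g., encoded relation names/descriptions
enhanced = fusion(prototypes, relation_embs)
print(enhanced.shape)  # torch.Size([5, 768])
```

The enhanced prototypes would then replace the original ones when computing query-to-prototype distances in the prototypical-network classifier.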
