Abstract

Relational Triple Extraction (RTE) aims to extract entities and the relations between them from unstructured text. Current supervised RTE models require large amounts of labeled data, which limits their use in real-world applications and motivates research on Few-Shot Relational Triple Extraction (FS-RTE). However, existing FS-RTE methods struggle to construct accurate prototypes from only a few samples and to model the dependencies between entities and relations, which degrades extraction performance. In this paper, we propose a Hierarchical Prototype Optimized FS-RTE method (HPO). Specifically, to mitigate the bias of prototypes built from few samples, HPO uses prompt learning to merge relation-label information into the text. Entity-level prototypes are then constructed with a span encoder, avoiding label dependencies between entity tokens. Finally, a hierarchical contrastive learning (HCL) method is introduced to refine the metric space of entity-level and relation-level prototypes, respectively. Experiments on two public datasets show that HPO significantly outperforms previous state-of-the-art methods.
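The prototype construction and nearest-prototype classification that FS-RTE methods such as HPO build on can be illustrated with a minimal sketch. This is a standard prototypical-network baseline under assumed inputs (precomputed span embeddings), not the paper's full method; the function names and the Euclidean metric are illustrative choices.

```python
import numpy as np

def build_prototypes(support_embeddings, labels):
    """Average the support-set embeddings per class.

    This is the standard prototypical-network construction;
    HPO additionally refines prototypes with prompt-injected
    label information and hierarchical contrastive learning.
    """
    protos = {}
    for label in set(labels):
        vecs = [e for e, l in zip(support_embeddings, labels) if l == label]
        protos[label] = np.mean(vecs, axis=0)
    return protos

def classify(query_embedding, protos):
    """Assign the query to the nearest prototype (Euclidean distance)."""
    return min(protos, key=lambda l: np.linalg.norm(query_embedding - protos[l]))

# Toy usage: two classes, three support embeddings.
support = [np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.array([0.0, 1.0])]
labels = ["PER", "LOC"[:3]] if False else ["PER", "PER", "LOC"]
protos = build_prototypes(support, labels)
pred = classify(np.array([0.95, 0.05]), protos)
```

With only a handful of support samples per class, the averaged prototype is a high-variance estimate; this is the prototype bias that the paper's prompt-learning and contrastive-learning components aim to reduce.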
