Abstract

Few-shot relation classification (FSRC) focuses on recognizing novel relations by learning from merely a handful of annotated instances. Meta-learning has been widely adopted for this task: models train on randomly generated few-shot tasks to learn generic data representations. Despite the impressive results achieved, existing models still perform suboptimally on hard FSRC tasks involving similar categories that the model struggles to distinguish. We argue this is largely due to two reasons: 1) ignoring pivotal, discriminative information that is crucial for separating confusable classes, and 2) training indiscriminately on randomly sampled tasks of varying difficulty. In this paper, we introduce a novel prototypical network approach with contrastive learning that learns more informative and discriminative representations by exploiting relation label information. We further design two strategies that increase the difficulty of training tasks and allow the model to adaptively focus on hard tasks. By doing so, our model can better capture subtle inter-relation variance and progressively master tasks of increasing difficulty. Extensive experiments on three standard benchmarks demonstrate the effectiveness of our method.
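To make the prototypical-network backbone concrete, the following is a minimal sketch of the core computation in a few-shot episode: each class prototype is the mean of its support-set embeddings, and a query is assigned to the nearest prototype. This is a generic illustration of prototypical networks, not the paper's full method; the function names and the use of Euclidean distance are assumptions, and the contrastive and task-difficulty components described in the abstract are omitted.

```python
import numpy as np

def prototypes(support, labels):
    """Compute one prototype per class as the mean of its support embeddings.

    support: (n_examples, dim) array of encoder outputs
    labels:  (n_examples,) array of integer class ids
    Returns (classes, protos) where protos is (n_classes, dim).
    """
    classes = np.unique(labels)
    protos = np.stack([support[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def classify(queries, protos, classes):
    """Assign each query embedding to the class of its nearest prototype
    (Euclidean distance, as in standard prototypical networks)."""
    dists = np.linalg.norm(queries[:, None, :] - protos[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy 2-way episode: two well-separated classes in a 2-D embedding space.
support = np.array([[0.0, 0.0], [0.2, 0.0], [10.0, 10.0], [9.8, 10.0]])
labels = np.array([0, 0, 1, 1])
classes, protos = prototypes(support, labels)
preds = classify(np.array([[0.1, 0.0], [9.0, 10.0]]), protos, classes)
```

In this sketch `preds` assigns the first query to class 0 and the second to class 1; the paper's contribution can be read as improving the embeddings feeding into this step so that prototypes of similar relations stay separable.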
