Abstract

Zero-shot relation extraction (ZSRE) is becoming increasingly important in modern information extraction systems; it aims to predict relation classes that lack annotations or have never appeared during training. Previous works project sentences and their corresponding relation descriptions into an intermediate semantic space and search for the nearest semantics to predict unseen classes. Although these methods achieve sound performance, they capture only limited semantic information through a trivial distance metric and neglect the interactions among instance representations. Motivated by these issues, we propose a hierarchical contrastive learning (HCL) framework for ZSRE consisting of projection-level and instance-level modules. Specifically, the projection-level component replaces the distance score function with a contrastive loss to connect the input sentence with the relation semantic space, while the instance-level component integrates external knowledge from sentence entities to establish new contrastive pairs and efficiently learn representations from mutual information. Experimental results on three well-known datasets demonstrate that our model surpasses the existing SOTA by up to 18.97% F1 when the number of unseen classes is 15. Moreover, our model remains competitive as the number of unseen classes increases.
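As a rough illustration of the projection-level idea described above, the sketch below shows an InfoNCE-style contrastive loss that pulls each sentence representation toward the embedding of its gold relation description while pushing it away from the other descriptions in the batch. This is a minimal assumption-laden sketch, not the paper's exact formulation: the function name, the temperature value, and the use of in-batch negatives are all illustrative choices.

```python
import torch
import torch.nn.functional as F


def projection_contrastive_loss(sent_emb, rel_emb, temperature=0.1):
    """Hypothetical projection-level contrastive loss (InfoNCE-style sketch).

    sent_emb: (batch, dim) sentence representations.
    rel_emb:  (batch, dim) embeddings of the gold relation descriptions,
              aligned row-wise with sent_emb; the other rows in the batch
              serve as negatives.
    """
    sent_emb = F.normalize(sent_emb, dim=-1)
    rel_emb = F.normalize(rel_emb, dim=-1)
    # Cosine similarity between every sentence and every relation description.
    logits = sent_emb @ rel_emb.t() / temperature
    # The matching relation description sits on the diagonal.
    targets = torch.arange(sent_emb.size(0), device=sent_emb.device)
    return F.cross_entropy(logits, targets)
```

Under such a formulation, prediction for an unseen class would reduce to encoding its relation description and selecting the description whose embedding is most similar to the sentence representation, rather than relying on a fixed distance metric.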
