Abstract

Several studies have focused on incorporating entity descriptions into language models to give the model a better grasp of knowledge. Existing methods typically integrate descriptions either in the pre-training stage, by designing description-related tasks, or in the fine-tuning stage, by directly appending description strings to the original input; this paper falls into the latter group. We separate entity descriptions from the original text and process them with a separate, lighter module: the original large model encodes the original input, while the lighter module encodes the entity descriptions. We also propose a layer-wise fusion strategy to deeply couple the representations of the input and the descriptions. To further improve the fusion of the two representations, we explore two auxiliary tasks: an entity-description enhancement task and an entity contrastive task. Experiments on the Open Entity, FIGER, FewRel, TACRED, and SST datasets yield respective improvements of 0.9, 1.4, 0.6, 0.5, and 0.3 points. Using ChatGPT as the description embedding method holds the potential for even more promising results.
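To make the layer-wise fusion idea concrete, below is a minimal PyTorch sketch of one way such a dual-encoder architecture could look. The module names, hidden sizes, and the cross-attention-plus-residual fusion rule are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: a large encoder for the input and a lighter
# encoder for entity descriptions, with fusion after every layer.
# All sizes and the fusion rule are assumptions, not the paper's design.
import torch
import torch.nn as nn

class LayerwiseFusionEncoder(nn.Module):
    def __init__(self, hidden=768, light_hidden=256, num_layers=4, heads=8):
        super().__init__()
        # "Large" encoder layers for the original input.
        self.main_layers = nn.ModuleList(
            nn.TransformerEncoderLayer(hidden, heads, batch_first=True)
            for _ in range(num_layers))
        # Lighter encoder layers for the entity descriptions.
        self.desc_layers = nn.ModuleList(
            nn.TransformerEncoderLayer(light_hidden, 4, batch_first=True)
            for _ in range(num_layers))
        # Project description states up to the main hidden size, per layer.
        self.proj = nn.ModuleList(
            nn.Linear(light_hidden, hidden) for _ in range(num_layers))
        # Cross-attention: input tokens attend to description tokens.
        self.fuse = nn.ModuleList(
            nn.MultiheadAttention(hidden, heads, batch_first=True)
            for _ in range(num_layers))

    def forward(self, x, desc):
        # x:    (batch, seq_len, hidden)        original input states
        # desc: (batch, desc_len, light_hidden) description states
        for main, light, proj, fuse in zip(
                self.main_layers, self.desc_layers, self.proj, self.fuse):
            x = main(x)
            desc = light(desc)
            d = proj(desc)
            # Queries come from the input; keys/values from the descriptions.
            fused, _ = fuse(x, d, d)
            x = x + fused  # residual fusion, repeated at every layer
        return x

if __name__ == "__main__":
    enc = LayerwiseFusionEncoder()
    x = torch.randn(2, 16, 768)
    desc = torch.randn(2, 32, 256)
    print(enc(x, desc).shape)  # torch.Size([2, 16, 768])
```

Fusing at every layer, rather than concatenating once at the input, is what lets the description representations influence the input encoding "deeply" in the sense the abstract describes.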
