Abstract

Few-shot knowledge graph completion (FKGC) aims to infer missing entities or relations in a knowledge graph from only a few support instances. Existing FKGC methods focus on learning few-shot relation representations, which are obtained by aggregating the neighbor information of each entity. However, most of these models treat an entity's neighbor relations and neighbor entities as a single, undifferentiated hierarchy without fine-grained distinctions, yielding entity embeddings with low expressiveness, which in turn degrades the quality of the learned few-shot relation embeddings. Moreover, many of these models directly use the concatenation of entity embeddings as the relation representation and neglect the valuable interactions between relations. In this paper, we propose IDEAL, a fine-grained relational learning framework for few-shot knowledge graph completion. Specifically, we first propose a hierarchical attention encoder that aggregates the neighbor information of each entity at two levels, i.e., the entity-relation level and the relation-entity level. We then propose a relation recoding validator to model the interaction between different relations. Instead of deriving the few-shot relation representation from entity embeddings, the relation recoding validator aggregates the neighbor relations of each entity to encode the few-shot relation, which reduces over-dependence on specific entities during the few-shot relation encoding phase. Motivated by the strong performance of the Transformer in modeling sequence information, we further extend the relation recoding module with a Transformer encoder to extract the underlying and valuable sequence information between relations. Extensive experiments are conducted on two datasets, NELL and Wiki. The experimental results demonstrate that our model outperforms state-of-the-art FKGC methods. In addition, an ablation study demonstrates the effectiveness of each key component, and a case study intuitively illustrates the interpretability of our model.
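
To make the two-level aggregation described above concrete, the following is a minimal sketch of one way an entity-relation / relation-entity hierarchical attention encoder could be organized. The module name, the grouping of neighbor entities under each neighbor relation, and the specific attention scoring functions are illustrative assumptions for exposition, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): a two-level neighbor aggregator.
# Assumed setup: the neighbors of an entity are (relation, entity) embedding pairs;
# the relation-entity level first pools the tail entities grouped under each
# neighbor relation, and the entity-relation level then attends over the
# resulting relation-aware summaries to produce the final entity encoding.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HierarchicalNeighborEncoder(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.ent_score = nn.Linear(2 * dim, 1)   # relation-entity level attention
        self.rel_score = nn.Linear(2 * dim, 1)   # entity-relation level attention
        self.out = nn.Linear(2 * dim, dim)

    def forward(self, center, rel_emb, ent_emb):
        """
        center:  (dim,)        embedding of the entity being encoded
        rel_emb: (R, dim)      its R distinct neighbor relations
        ent_emb: (R, M, dim)   M neighbor entities grouped under each relation
        """
        R, M, d = ent_emb.shape
        # Relation-entity level: attend over neighbor entities within each relation.
        rel_exp = rel_emb.unsqueeze(1).expand(R, M, d)
        a_ent = F.softmax(
            self.ent_score(torch.cat([rel_exp, ent_emb], dim=-1)).squeeze(-1), dim=-1)
        rel_summary = (a_ent.unsqueeze(-1) * ent_emb).sum(dim=1)        # (R, dim)
        # Entity-relation level: attend over the relation-aware summaries.
        ctr_exp = center.unsqueeze(0).expand(R, d)
        a_rel = F.softmax(
            self.rel_score(torch.cat([ctr_exp, rel_summary], dim=-1)).squeeze(-1), dim=-1)
        neighbor_info = (a_rel.unsqueeze(-1) * rel_summary).sum(dim=0)  # (dim,)
        # Fuse the aggregated neighbor information with the center entity embedding.
        return torch.tanh(self.out(torch.cat([center, neighbor_info], dim=-1)))
```

For the relation recoding step, the abstract's description suggests that the sequence of an entity's neighbor relations could analogously be fed through a standard Transformer encoder (e.g., `nn.TransformerEncoder`) and pooled into a few-shot relation representation; the exact recoding and validation formulation is specified in the paper itself.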
