Abstract

Temporal knowledge graph (TKG) reasoning aims to infer missing links from massive historical facts. A key challenge is how to model entity evolution from both the local and, especially, the global perspective. Prior temporal dependency models often fail to disentangle the two perspectives because no explicit annotations mark the boundary between these two representations. To address these limitations, we propose a contrastive learning framework that Disentangles Local and Global perspectives for TKG Reasoning in a self-supervised manner (DLGR). DLGR jointly exploits the local and global perspectives on two separate graphs and disentangles them with self-supervision. First, we construct a temporal subgraph and a temporal unified graph to effectively learn the local and global perspective representations, respectively. Second, we extract proxies from the different neighborhoods as pseudo labels to supervise the local-global disentanglement in a contrastive manner. Finally, we adaptively fuse the two learned perspective representations for TKG reasoning. Empirical results show that DLGR significantly outperforms other baselines (e.g., compared to the strong baseline HGLS, DLGR achieves 4.3%, 3.4%, 1.6%, and 1.1% MRR improvements on ICEWS14, ICEWS18, YAGO, and WIKI, respectively).
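The abstract names two reusable ingredients: a contrastive objective that pulls an entity's representation toward its proxy pseudo label and away from other proxies, and an adaptive fusion of the local and global representations. The sketch below illustrates both in plain Python; the function names, the InfoNCE-style loss, and the sigmoid gate are our assumptions for illustration, not the paper's exact formulation (which the abstract does not specify).

```python
import math

def cos_sim(a, b):
    # Cosine similarity between two dense representation vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def contrastive_loss(anchor, positive, negatives, tau=0.1):
    # InfoNCE-style loss (assumed form): pull the anchor representation
    # toward the proxy acting as its pseudo label ("positive") and push
    # it away from proxies of other neighborhoods ("negatives").
    pos = math.exp(cos_sim(anchor, positive) / tau)
    neg = sum(math.exp(cos_sim(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))

def adaptive_fusion(h_local, h_global, w=1.0):
    # Hypothetical adaptive fusion: a per-dimension sigmoid gate decides
    # how much of the local vs. global representation to keep.
    fused = []
    for l, g in zip(h_local, h_global):
        gate = 1.0 / (1.0 + math.exp(-w * (l + g)))
        fused.append(gate * l + (1.0 - gate) * g)
    return fused
```

As expected of a contrastive objective, the loss is small when the anchor matches its pseudo-label proxy and large when it matches a negative, which is the signal that drives the local-global disentanglement.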

