Abstract

Temporal knowledge graphs (TKGs) are widely used in artificial intelligence but are usually incomplete. Recently, researchers have proposed reasoning methods to infer missing relations in TKGs. TKG reasoning is mainly divided into single-hop reasoning and multi-hop reasoning, where multi-hop reasoning takes the semantics of facts into account. However, existing multi-hop reasoning methods for TKGs lack a path memory component, so their results depend heavily on how well the model is trained. In addition, they assign the same weight to the features of different neighbors, which makes it difficult to distinguish each neighbor's importance to the central entity. To address these limitations, we propose RLAT, a multi-hop reasoning model that combines Reinforcement Learning with an ATtention mechanism. First, we use an LSTM and an attention mechanism as memory components, which help the model learn multi-hop reasoning paths. Second, we propose an attention mechanism with an influence factor, which measures the influence of each neighbor's information and produces distinct feature vectors. The resulting policy function makes the agent focus on relations that occur with high frequency and thus infer multi-hop reasoning paths with higher correlation. Experimental results demonstrate that our approach outperforms recent methods on most metrics.
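The abstract gives no equations, so the sketch below only illustrates the general idea of attention with an influence factor: each neighbor's attention score is modulated by how frequently its relation occurs before normalization, so frequent relations receive larger weights. The function name, the tensor shapes, and the use of a log-scaled frequency term are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def neighbor_attention(central, neighbors, relation_freq, temperature=1.0):
    """Aggregate neighbor features with frequency-biased attention (illustrative sketch).

    central:       (d,)   embedding of the central entity
    neighbors:     (n, d) embeddings of the neighboring entities/relations
    relation_freq: (n,)   occurrence counts of each neighbor's relation,
                          used here as a hypothetical "influence factor"
    Returns a (d,) context vector.
    """
    # Raw attention scores: similarity between the central entity and each neighbor.
    scores = neighbors @ central / temperature            # shape (n,)

    # Influence factor: bias attention toward frequently occurring relations.
    # log1p keeps very common relations from dominating completely (an assumption).
    influence = np.log1p(relation_freq)                   # shape (n,)
    weighted = scores + influence

    # Softmax normalization over neighbors.
    weights = np.exp(weighted - weighted.max())
    weights /= weights.sum()

    # Each neighbor now contributes a differently weighted feature vector.
    return weights @ neighbors                            # shape (d,)

# Toy usage: 4 neighbors with 8-dimensional embeddings.
rng = np.random.default_rng(0)
central = rng.normal(size=8)
neighbors = rng.normal(size=(4, 8))
relation_freq = np.array([50, 3, 120, 7])
context = neighbor_attention(central, neighbors, relation_freq)
print(context.shape)  # (8,)
```

In RLAT this kind of weighted context would feed the LSTM-based memory component so the agent's policy can favor high-frequency relations along the reasoning path; the specific combination used in the paper is described in the full text.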
