Abstract
Temporal knowledge graphs (TKGs) are widely used in artificial intelligence but are usually incomplete. Recently, researchers have proposed reasoning methods to infer missing relations in TKGs. TKG reasoning is mainly divided into single-hop reasoning and multi-hop reasoning; multi-hop reasoning takes the semantics of facts into account. However, existing multi-hop reasoning methods for TKGs lack a path memory component, so the quality of their reasoning results depends heavily on training. In addition, they assign the same weight to all adjacent feature information, which makes it difficult to distinguish the importance of different neighbors to the central entity. We propose a multi-hop reasoning model combining Reinforcement Learning with the ATtention mechanism (RLAT) to address these limitations. First, we use an LSTM and an attention mechanism as memory components, which help the model learn multi-hop reasoning paths. Second, we propose an attention mechanism with an influence factor, which measures the influence of neighbor information and provides distinct feature vectors for different neighbors. The resulting policy function makes the agent focus on relations that occur with high frequency and thus infer multi-hop reasoning paths with higher correlation. Experimental results demonstrate that our approach outperforms recent methods on most metrics.
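The abstract does not give the exact formulation of the influence-factor attention. As a rough, non-authoritative sketch of the weighting idea it describes, the snippet below assumes a bilinear attention score between the central entity and each neighbor feature, and uses the occurrence frequency of each neighbor relation as a stand-in for the influence factor; all names and the log-frequency scaling are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate_neighbors(entity_vec, neighbor_vecs, neighbor_freqs, W):
    """Attention over neighbor features, scaled by an influence factor.

    entity_vec     : (d,)   embedding of the central entity
    neighbor_vecs  : (n, d) embeddings of adjacent (relation, entity) features
    neighbor_freqs : (n,)   occurrence counts of each neighbor relation
                     (assumed proxy for the paper's influence factor)
    W              : (d, d) learned bilinear attention parameters
    """
    # Raw attention scores between the central entity and each neighbor.
    scores = neighbor_vecs @ (W @ entity_vec)        # (n,)
    # Influence factor: bias attention toward frequently occurring relations.
    influence = np.log1p(neighbor_freqs)             # (n,)
    weights = softmax(scores * influence)            # (n,)
    # Weighted aggregation yields a neighbor-aware feature for the entity,
    # giving different neighbors different contributions instead of equal weight.
    return weights @ neighbor_vecs                   # (d,)

# Toy usage with random embeddings
rng = np.random.default_rng(0)
d, n = 8, 5
out = aggregate_neighbors(rng.normal(size=d),
                          rng.normal(size=(n, d)),
                          np.array([3., 1., 7., 2., 5.]),
                          rng.normal(size=(d, d)))
print(out.shape)  # (8,)
```

Under this reading, the frequency-scaled weights push the agent's policy toward high-frequency relations when extending a reasoning path, which matches the behavior the abstract attributes to RLAT.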