Abstract

Effectively analyzing and mining large-scale heterogeneous information networks (HINs) with network representation learning (NRL) approaches has received increasing attention. The abundant semantic and structural information contained in HINs not only facilitates network analysis and downstream tasks, but also poses special challenges to capturing that rich information well. With the aim of preserving such rich yet latent information during HIN embedding, we first discuss the latent dependence that exists among indirect neighbors, and then study the differing abilities of the forward and backward layers of a bidirectional recurrent neural network to retain the semantics of HINs. Finally, we propose a novel representation learning model for HINs, namely RL4HIN. RL4HIN uses a skip-dependence strategy to strengthen the latent dependence between distant neighbors, and a weighted loss function to balance the difference between the forward and backward layers. Extensive experiments, including node classification and visualization, have been conducted on two large-scale, real-world HINs. The experimental results show that RL4HIN significantly outperforms several state-of-the-art NRL approaches.
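
To make the weighted-loss idea concrete, here is a minimal PyTorch sketch of a loss that balances the forward and backward layers of a bidirectional RNN. The abstract does not give the paper's exact formulation, so the class name WeightedBiRNNLoss, the balancing weight alpha, the use of cross-entropy over next-node prediction, and all dimensions are illustrative assumptions, not RL4HIN's actual definition.

```python
import torch
import torch.nn as nn

class WeightedBiRNNLoss(nn.Module):
    """Combine forward- and backward-layer losses with a balancing weight.

    `alpha` weights the forward layer's loss and (1 - alpha) the backward
    layer's. Both alpha and the per-layer cross-entropy are assumptions
    standing in for the paper's unstated weighted loss.
    """
    def __init__(self, alpha: float = 0.6):
        super().__init__()
        self.alpha = alpha
        self.criterion = nn.CrossEntropyLoss()

    def forward(self, fwd_logits, bwd_logits, targets):
        loss_fwd = self.criterion(fwd_logits, targets)
        loss_bwd = self.criterion(bwd_logits, targets)
        return self.alpha * loss_fwd + (1.0 - self.alpha) * loss_bwd

# Usage sketch: split a bidirectional GRU's output into its two directions,
# score each direction against hypothetical target nodes, and combine.
rnn = nn.GRU(input_size=64, hidden_size=32, bidirectional=True, batch_first=True)
proj = nn.Linear(32, 100)            # hypothetical vocabulary of 100 nodes
x = torch.randn(8, 5, 64)            # batch of 8 walks, each of length 5
out, _ = rnn(x)                      # (8, 5, 64): fwd and bwd concatenated
fwd, bwd = out[..., :32], out[..., 32:]
targets = torch.randint(0, 100, (8, 5))
loss_fn = WeightedBiRNNLoss(alpha=0.6)
loss = loss_fn(proj(fwd).flatten(0, 1), proj(bwd).flatten(0, 1), targets.flatten())
```

Because the two directions read a walk in opposite orders, they preserve different parts of its semantics; a scalar weight like alpha is one simple way to trade off their contributions during training.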
