Abstract
Effectively analyzing and mining large-scale heterogeneous information networks (HINs) with network representation learning (NRL) approaches has received increasing attention. The abundant semantic and structural information contained in HINs not only facilitates network analysis and downstream tasks, but also poses particular challenges for capturing that rich information well. Aiming to preserve this rich yet latent information during HIN embedding, we first discuss the latent dependence that exists between indirect neighbors, and then study the differing abilities of the forward and backward layers of a bidirectional recurrent neural network to retain the semantics of HINs. Finally, we propose a novel representation learning model for HINs, named RL4HIN. RL4HIN employs a skip-dependence strategy to strengthen the latent dependence between distant neighbors, and introduces a weighted loss function to balance the difference between the forward and backward layers. Extensive experiments, including node classification and visualization, have been conducted on two large-scale, real-world HINs. The experimental results show that RL4HIN significantly outperforms several state-of-the-art NRL approaches.
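As a rough illustration of the two ideas summarized in the abstract, the sketch below shows (a) a skip-dependence-style sampling of node pairs up to several hops apart along a walk, and (b) a weighted objective that combines the forward and backward layers of a bidirectional GRU with a balancing coefficient. This is a minimal sketch assuming a PyTorch setting; all names (`BiRNNEncoder`, `skip_pairs`, `weighted_loss`, `alpha`) are hypothetical illustrations and not the authors' actual implementation.

```python
# Minimal, hypothetical sketch of the two ideas named in the abstract.
# Not the authors' RL4HIN code; an assumption-labeled illustration only.
import torch
import torch.nn as nn


def skip_pairs(seq, max_skip=3):
    """Hypothetical skip-dependence sampling: pair each node with neighbors
    up to max_skip steps away along a walk, so farther (indirect) neighbors
    also contribute a training signal."""
    pairs = []
    for i in range(len(seq)):
        for k in range(1, max_skip + 1):
            if i + k < len(seq):
                pairs.append((seq[i], seq[i + k]))
    return pairs


class BiRNNEncoder(nn.Module):
    """Encodes a node sequence with a bidirectional GRU and returns the
    forward-layer and backward-layer hidden states separately."""

    def __init__(self, num_nodes, dim):
        super().__init__()
        self.embed = nn.Embedding(num_nodes, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True, bidirectional=True)

    def forward(self, seq):  # seq: (batch, length) node ids
        h, _ = self.rnn(self.embed(seq))        # (batch, length, 2*dim)
        half = h.size(-1) // 2
        return h[..., :half], h[..., half:]     # forward, backward states


def weighted_loss(fwd, bwd, target_emb, alpha=0.7):
    """Hypothetical weighted objective: alpha balances the unequal abilities
    of the forward and backward layers to fit the target embeddings."""
    mse = nn.functional.mse_loss
    return alpha * mse(fwd, target_emb) + (1 - alpha) * mse(bwd, target_emb)


if __name__ == "__main__":
    enc = BiRNNEncoder(num_nodes=100, dim=16)
    walk = torch.randint(0, 100, (4, 10))       # a batch of random walks
    fwd, bwd = enc(walk)
    target = torch.randn(4, 10, 16)             # placeholder target embeddings
    print(weighted_loss(fwd, bwd, target).item())
    print(skip_pairs(list(range(5)), max_skip=2))
```

The choice of `alpha` here stands in for whatever balancing scheme the paper uses; the point is only that the two directional layers receive unequal weight in the loss rather than being averaged.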