Abstract

Many real-world networks, such as social networks, exhibit structural heterogeneity and evolve over time. However, despite a growing literature on network representation learning, only a few studies have addressed the need to learn representations for dynamic heterogeneous networks. In this paper we introduce DyHNet, which learns representations for such networks and distinguishes itself from the state of the art by systematically capturing (1) local node semantics, (2) global network semantics, and (3) longer-range temporal associations between network snapshots. Through experiments on four real-world datasets from different domains, namely IMDB with 4,178 movies, AMiner with 10,674 papers, Yelp with 2,693 businesses, and DBLP with 14,376 papers, we demonstrate that our proposed method consistently achieves better and more robust performance than state-of-the-art techniques on link prediction and node classification tasks. More specifically, our method outperforms the best baseline on the temporal link prediction task by approximately 13% and 15% in F1-score on the IMDB and AMiner datasets, respectively. Further, on the node classification task, the Micro F1 scores of our proposed model increase by 13% and 17% over the runner-up model on the Yelp and DBLP datasets, respectively.
