Abstract

Large-scale complex networks capture complex nonlinear relationships among objects, such as social relationships in the real world, citation relationships among papers, and interactions among proteins in biology. Analyzing such complex network systems makes it possible to reveal network structures, laws of information dissemination, and communication patterns. Network representation learning (NRL) algorithms map the original network structure information to a low-dimensional vector space while retaining as much of the network structure as possible. To analyze current representative NRL algorithms effectively and provide valuable references for other researchers, we built an experimental platform to run and test NRL algorithms based on matrix factorization, shallow neural networks, and deep neural networks, using collaboration network, social network, and citation network datasets. We conducted a series of comprehensive experiments on network reconstruction, vertex classification, and link prediction, evaluated with the metrics precision@k, micro-F1, and macro-F1, and present the principles, performance, and applications of typical NRL algorithms.
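The sketch below illustrates, under assumptions not stated in the abstract (scikit-learn for F1 computation and inner-product similarity between learned embeddings as the link score), how the metrics mentioned above can be computed; it is a minimal illustration, not the evaluation code of the platform itself.

```python
# Minimal sketch of the evaluation metrics: precision@k for network
# reconstruction / link prediction and micro-/macro-F1 for vertex
# classification. Assumes scikit-learn and NumPy are available.
import numpy as np
from sklearn.metrics import f1_score

def precision_at_k(scored_pairs, true_edges, k):
    """scored_pairs: list of (u, v, score); true_edges: set of vertex pairs."""
    top_k = sorted(scored_pairs, key=lambda t: t[2], reverse=True)[:k]
    hits = sum(1 for u, v, _ in top_k
               if (u, v) in true_edges or (v, u) in true_edges)
    return hits / k

# Vertex classification: compare predicted labels against ground truth.
y_true = np.array([0, 1, 1, 2, 0, 2])
y_pred = np.array([0, 1, 2, 2, 0, 1])
print("micro-F1:", f1_score(y_true, y_pred, average="micro"))
print("macro-F1:", f1_score(y_true, y_pred, average="macro"))

# Network reconstruction / link prediction: rank candidate vertex pairs by
# embedding similarity (here, inner product) and check the top-k pairs.
embeddings = {0: np.array([0.9, 0.1]),
              1: np.array([0.8, 0.2]),
              2: np.array([0.1, 0.9])}
pairs = [(u, v, float(embeddings[u] @ embeddings[v]))
         for u in embeddings for v in embeddings if u < v]
print("precision@2:", precision_at_k(pairs, true_edges={(0, 1)}, k=2))
```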
