Abstract

Network embedding, also known as network representation learning, aims to represent the nodes of a network as low-dimensional, real-valued, dense vectors, so that network inference can be carried out in vector space and the vectors can be used directly as input to machine learning models for common social-network applications such as visualization, node classification, link prediction, community detection, and recommendation. However, for a network embedding algorithm to perform well on social networks, it is not sufficient to consider the network structure alone while ignoring the rich textual information. Traditional network embedding algorithms that incorporate textual information assign a single, uniform embedding size to all features; because a node can have a very large number of features, this produces a huge embedding table and an extremely expensive memory footprint. In this paper, we propose a memory-efficient network embedding algorithm with text information that both exploits the rich textual information associated with the nodes and adaptively assigns a different embedding dimension to each feature embedding. The resulting algorithm retains the ability to perform network inference in vector space while greatly reducing the number of embedding parameters, lowering memory consumption, and improving performance.
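To make the memory argument concrete, the following is a minimal Python sketch comparing a uniform-dimension embedding table against one whose per-feature dimensions shrink for rarer features. The frequency-proportional rule, the function names, and the toy feature counts below are hypothetical illustrations only; the paper's actual dimension assignment is learned automatically rather than set by a fixed rule.

```python
# Sketch: parameter count of a uniform embedding table vs. an adaptive
# one that gives rare features smaller dimensions. The frequency-based
# rule here is a hypothetical illustration, not the paper's method.

def uniform_params(feature_counts, dim):
    """Every feature gets the same embedding dimension."""
    return len(feature_counts) * dim

def adaptive_params(feature_counts, base_dim, min_dim=4):
    """Hypothetical rule: scale each feature's dimension by its
    relative frequency, never going below min_dim."""
    max_count = max(feature_counts)
    return sum(
        max(min_dim, round(base_dim * c / max_count))
        for c in feature_counts
    )

# Toy vocabulary: a few frequent features and many rare ones,
# the long-tailed distribution typical of textual node features.
counts = [10000, 8000, 500] + [10] * 997   # 1000 features total
print(uniform_params(counts, dim=128))       # 128000 parameters
print(adaptive_params(counts, base_dim=128)) # far fewer parameters
```

Under this toy distribution the adaptive table needs well under 5% of the uniform table's parameters, because the long tail of rare features collapses to the minimum dimension.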
