Abstract

This paper concerns the problem of network embedding (NE), which aims to learn low-dimensional representations for network nodes. Such dense representations offer great promise for many network analysis problems. However, existing approaches still face challenges posed by the characteristics of complex real-world networks. First, for networks with rich content information, previous methods often learn separate content and structure representations, which must then be combined in a post-processing step; empirical combination strategies often leave the final vectors suboptimal. Second, existing methods preserve structural information by considering a short, fixed neighborhood scope, such as the first- and/or second-order proximities, yet the appropriate neighborhood scope is hard to decide in complex problems. To this end, we propose a novel sequence-to-sequence NE framework referred to as Self-Translation Network Embedding (STNE). Given sampled node sequences, STNE translates each sequence from its content sequence to its node sequence. On the one hand, the bi-directional LSTM encoder seamlessly fuses content and structure information from the raw input. On the other hand, high-order proximity can be flexibly learned through the memory of the LSTM, capturing long-range structural information. Experimental results on three real-world datasets demonstrate the superiority of STNE.
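To make the self-translation idea concrete, below is a minimal sketch in PyTorch (an assumption; the abstract does not specify a framework). It reads the content features of a sampled random walk with a bi-directional LSTM and predicts the walk's node-ID sequence from the encoded states. The class `STNESketch`, its dimensions, and the per-position linear output layer are hypothetical simplifications; the paper describes a full sequence-to-sequence model, so this is an illustrative sketch rather than the authors' implementation.

```python
# Illustrative sketch of the self-translation idea: encode a walk's content
# sequence with a bi-directional LSTM, then "translate" it back into the
# corresponding node-ID sequence. Names and hyperparameters are hypothetical;
# this is NOT the authors' reference implementation.
import torch
import torch.nn as nn

class STNESketch(nn.Module):
    def __init__(self, content_dim, hidden_dim, num_nodes):
        super().__init__()
        # Bi-directional LSTM encoder: fuses content features with the
        # structural order imposed by the sampled walk.
        self.encoder = nn.LSTM(content_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        # Per-position classifier mapping each encoded state to a node ID
        # (a simplification of the paper's sequence decoder).
        self.decoder = nn.Linear(2 * hidden_dim, num_nodes)

    def forward(self, content_seq):
        # content_seq: (batch, walk_len, content_dim)
        states, _ = self.encoder(content_seq)  # (batch, walk_len, 2*hidden_dim)
        logits = self.decoder(states)          # (batch, walk_len, num_nodes)
        return states, logits                  # states serve as node embeddings

# Hypothetical usage on one batch of sampled walks:
num_nodes, content_dim, hidden_dim = 1000, 64, 128
model = STNESketch(content_dim, hidden_dim, num_nodes)
content_seq = torch.randn(8, 10, content_dim)     # content of 8 walks, length 10
node_seq = torch.randint(0, num_nodes, (8, 10))   # the walks' node IDs
states, logits = model(content_seq)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, num_nodes), node_seq.reshape(-1))
loss.backward()  # training would repeat this over many sampled walks
```

Because the LSTM's memory carries information across the whole walk, each position's encoded state can reflect nodes arbitrarily far away in the sequence, which is how a model of this shape can capture proximity beyond a fixed first- or second-order scope.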
