Network embedding is a fundamental task underpinning many network applications: it encodes an input network from a high-dimensional, sparse topological space into a low-dimensional, dense vector space. Recently, interest has grown in embedding learning on Partially Labeled Attributed Networks (PLANs), driven by the increasing availability of node attributes and partially observed category labels in real-world networks. Semi-supervised embedding learning is the standard approach for PLANs, using category labels to supervise the learning process. However, semi-supervised learning can fail when labels are scarce, noisy, or unreliable. Moreover, most existing embedding algorithms do not effectively integrate heterogeneous information such as labels, attributes, and structure. To address these issues, we develop a new model, Dual-Channel Network Embedding (DcNE), which integrates different types of network information into embeddings from a mutual information (MI) perspective. Specifically, we construct a dual-channel information propagation framework that encodes the input network under semi-supervised and self-supervised learning paradigms in parallel. Furthermore, a redundancy elimination module captures and removes redundant information shared by the two encoders. Finally, we propose a unified optimization model in which the two learning paradigms collaborate effectively. Experiments on real-world datasets demonstrate the effectiveness of DcNE across various network analysis tasks and its superiority over state-of-the-art baselines.
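The dual-channel idea can be caricatured in a few lines of NumPy: two encoders produce embeddings in parallel, and a redundancy term penalizes overlap between their outputs, which is then folded into one combined objective. Everything below (the toy linear encoders, the cross-correlation redundancy proxy, the weight `lam`) is an illustrative assumption for exposition, not DcNE's actual architecture or loss:

```python
import numpy as np

rng = np.random.default_rng(0)

X = rng.normal(size=(6, 10))        # node attribute matrix: 6 nodes, 10 features
W_semi = rng.normal(size=(10, 4))   # hypothetical semi-supervised channel weights
W_self = rng.normal(size=(10, 4))   # hypothetical self-supervised channel weights

def encode(X, W):
    """Toy one-layer encoder: linear map followed by tanh."""
    return np.tanh(X @ W)

# Dual-channel encoding: both channels process the same input in parallel.
Z_semi = encode(X, W_semi)
Z_self = encode(X, W_self)

def redundancy(Z1, Z2):
    """Illustrative proxy for inter-channel redundancy: mean squared
    cross-correlation between the centered outputs of the two encoders
    (a stand-in for the MI-based module described in the abstract)."""
    Z1c = Z1 - Z1.mean(axis=0)
    Z2c = Z2 - Z2.mean(axis=0)
    C = Z1c.T @ Z2c / Z1.shape[0]   # 4x4 cross-correlation matrix
    return float((C ** 2).mean())

r = redundancy(Z_semi, Z_self)

# Unified objective: placeholder semi- and self-supervised losses plus a
# weighted redundancy penalty. In the real model the first two terms would
# be the label-supervised and self-supervised objectives.
semi_loss, self_loss, lam = 0.0, 0.0, 0.1
total_loss = semi_loss + self_loss + lam * r
```

Minimizing the redundancy term encourages the two channels to carry complementary rather than duplicated information, which is the intuition behind the redundancy elimination module.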