Abstract

Graph contrastive learning has developed rapidly for learning representations from graph-structured data; it aims to maximize the mutual information between two representations learned from different augmented views of a graph. However, maximizing the mutual information between views without any constraints may cause encoders to capture information irrelevant to downstream tasks, limiting the effectiveness of graph contrastive learning methods. To tackle this issue, we propose a Graph Contrastive Learning method with Min-max mutual Information (GCLMI). Specifically, we conduct a theoretical analysis to derive our learning objective, which applies a min-max principle to constrain the mutual information among multiple views, i.e., between a graph and each of its augmented views as well as between different augmented views. Based on this objective, we construct two augmented views by separating the feature and topology information of a graph, so that each view preserves distinct semantic information. We then maximize the mutual information between each augmented view and the graph while minimizing the mutual information between the two augmented views, to learn informative and diverse representations. Extensive experiments on a variety of graph datasets show that GCLMI achieves better or competitive performance compared with state-of-the-art methods.
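The min-max objective described above can be sketched in code. The following is a minimal illustration, not the paper's implementation: it assumes InfoNCE as the mutual-information estimator (a common lower bound in contrastive learning), and the function names, embedding shapes, and trade-off weight `lam` are illustrative assumptions rather than details from the paper.

```python
import numpy as np

def info_nce(a, b, tau=0.5):
    """InfoNCE lower bound on the mutual information between paired
    embeddings a[i] <-> b[i]; higher means more shared information."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    sim = a @ b.T / tau  # temperature-scaled cosine similarities
    # Row-wise log-softmax; the positive pair sits on the diagonal.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(np.mean(np.diag(log_prob)))

def gclmi_loss(z_graph, z_view1, z_view2, lam=1.0):
    """Hypothetical min-max objective in the spirit of GCLMI:
    maximize MI(graph, each view), minimize MI(view1, view2).
    `lam` weights the minimization term and is an assumption."""
    return (
        -(info_nce(z_graph, z_view1) + info_nce(z_graph, z_view2))
        + lam * info_nce(z_view1, z_view2)
    )

# Example with random embeddings standing in for encoder outputs.
rng = np.random.default_rng(0)
z_g = rng.normal(size=(32, 16))   # graph embeddings
z_1 = rng.normal(size=(32, 16))   # feature-view embeddings
z_2 = rng.normal(size=(32, 16))   # topology-view embeddings
loss = gclmi_loss(z_g, z_1, z_2)
```

Minimizing this loss pushes each view's embedding toward the graph embedding while pushing the two views apart, which matches the stated goal of keeping the views informative yet diverse.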
