Abstract

Existing graph neural networks (GNNs) associate network nodes with individual samples in a dataset, and thus ignore the conceptual information hidden in the dataset's object-attribute clusters. Moreover, GNNs cannot process data that lack structural information, since such structure is a required input to the network. In this paper, we aim to integrate conceptual information into the message passing of GNNs by fusing concept lattice theory into existing GNNs. A concept lattice is a powerful tool for describing generalization and specialization relations between formal concepts, and formal concepts, the basic elements of concept lattices, effectively explain dependencies between features and samples. On this basis, we propose a new GNN framework induced by a concept lattice that overcomes these intrinsic limitations: it not only injects conceptual information into the message passing but also enables a GNN to process data with or without structural information. The proposed framework is validated under both transductive and inductive learning conditions. Experimental results show that GNNs induced by concept lattices handle the information hidden in datasets effectively and improve classification accuracy on most benchmark datasets over previous methods.
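To make the notion of a formal concept concrete, the following is a minimal sketch of formal concept analysis, the theory underlying concept lattices. The toy formal context (three samples, three binary attributes) is a hypothetical illustration, not data or code from the paper: a formal concept is a pair (extent, intent) where the extent is exactly the set of objects sharing the intent, and the intent is exactly the set of attributes common to the extent.

```python
from itertools import combinations

# Hypothetical toy formal context: sample -> set of attributes it possesses.
context = {
    "s1": {"f1", "f2"},
    "s2": {"f1"},
    "s3": {"f2", "f3"},
}
attributes = {"f1", "f2", "f3"}

def intent(objects):
    """Derivation operator: attributes shared by every object in the set."""
    sets = [context[o] for o in objects]
    return set.intersection(*sets) if sets else set(attributes)

def extent(attrs):
    """Derivation operator: objects possessing every attribute in the set."""
    return {o for o, feats in context.items() if attrs <= feats}

def concepts():
    """Enumerate all formal concepts by closing every attribute subset."""
    found = set()
    for r in range(len(attributes) + 1):
        for attrs in combinations(sorted(attributes), r):
            a = extent(set(attrs))        # extent of the attribute subset
            b = intent(a)                 # closure: intent of that extent
            found.add((frozenset(a), frozenset(b)))
    return found
```

Ordering these concepts by inclusion of their extents yields the concept lattice, whose generalization/specialization structure the proposed framework feeds into message passing.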
