In recent years, Graph Neural Networks (GNNs) have achieved notable success in fields such as recommendation systems and natural language processing, largely owing to the availability of vast amounts of data and powerful computational resources. GNNs are primarily designed for graph data that encode pairwise relationships. However, in many real-world networks, such as scientific collaboration networks and protein networks, the relationships between entities are complex and go beyond simple pairwise connections. Forcing these higher-order relationships into the pairwise edges of an ordinary graph can lead to information loss. A hypergraph, as a generalization of graph-structured data in which a single hyperedge may connect any number of vertices, can represent higher-order relationships that graphs cannot fully capture, thereby addressing this limitation. In light of this, researchers have begun to study how to design neural networks on hypergraphs, leading to the proposal of hypergraph neural network (HGNN) models for downstream tasks. This paper therefore reviews existing hypergraph neural network models. The review is conducted from two perspectives, spectral analysis methods and neural network methods on hypergraphs, covering both expansion-based and non-expansion-based methods and further subdividing them according to their algorithmic characteristics and application scenarios. The design ideas of the various algorithms are then analyzed and compared, and the advantages and disadvantages of each type of algorithm are summarized on the basis of experimental results. Finally, potential future research directions in hypergraph learning are discussed.
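To illustrate the kind of operation such models perform, a widely used spectral-style hypergraph convolution layer (the formulation popularized by the HGNN model of Feng et al.; the notation here follows that line of work rather than this survey and is given only as a sketch) can be written as

X^{(l+1)} = \sigma\!\left( D_v^{-1/2}\, H\, W\, D_e^{-1}\, H^{\top}\, D_v^{-1/2}\, X^{(l)}\, \Theta^{(l)} \right),

where H is the vertex-hyperedge incidence matrix, W is a diagonal matrix of hyperedge weights, D_v and D_e are the vertex and hyperedge degree matrices, X^{(l)} denotes the node features at layer l, \Theta^{(l)} is a learnable weight matrix, and \sigma is a nonlinear activation. Intuitively, features are first aggregated from vertices to the hyperedges they belong to and then propagated back to vertices, which is how higher-order (non-pairwise) structure enters the computation.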