A path-aware graph neural network for heterophily graph learning


Similar Papers
  • Dissertation
  • Cited by 1
  • 10.14264/uql.2015.623
Some aspects of representation and learning in artificial neural networks
  • Jan 1, 1991
  • Nicholas J Redding

This thesis concerns some aspects of representation and learning in artificial neural networks. The representational issues that are dealt with here concern the concept of order, originally defined by Minsky and Papert. The relationship between the order of a neural network mapping problem and the required network fan-in is discussed, and a polynomial time algorithm to determine the order of a problem is presented. This algorithm also computes the weights required in networks with a single layer of adjustable weights, including higher order networks and mask-perceptrons. A critical analysis of some of the work of Minsky and Papert, particularly that concerning the parity predicate, is also presented. Learning issues relating to the correct classification of all patterns in a pattern set are discussed. It has previously been recognized that minimizing an error function based on the l2 norm does not necessarily lead to the correct classification of the patterns in a pattern set even when this is possible for the network under consideration. Here it is proven that an error function based upon the l∞ norm can overcome this problem with l2 based error functions when a correctly classifying solution is desired. In addition, attention is drawn to the fact that any error function based on the l∞ norm is nonsmooth and so requires special techniques for minimization, a fact that was previously unrecognized. Finally, material is drawn from the field of nonsmooth optimization to obtain an algorithm for learning in networks with nonsmooth error functions and nonsmooth neural transfer functions.
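
As a rough illustration of the thesis's point about l2- versus l∞-based error functions, here is a minimal numpy sketch (the data, step size, and subgradient scheme are hypothetical, not taken from the thesis): minimizing the worst-case residual targets the hardest pattern directly, and because a max of smooth functions is nonsmooth, it is minimized with subgradient steps rather than ordinary gradient descent.

```python
import numpy as np

# Toy single-layer unit on a separable pattern set; targets are +/-1 and
# a pattern counts as correctly classified when sign(w . x) matches it.
X = np.array([[1.0, 0.2], [0.9, -0.1], [-1.0, 0.3], [-0.8, -0.2]])
t = np.array([1.0, 1.0, -1.0, -1.0])

def l2_error(w):
    return np.mean((X @ w - t) ** 2)     # smooth, gradient-friendly

def linf_error(w):
    return np.max(np.abs(X @ w - t))     # nonsmooth: a max of smooth terms

w = np.zeros(2)
for _ in range(300):
    r = X @ w - t
    i = np.argmax(np.abs(r))             # pattern realizing the max error
    w -= 0.05 * np.sign(r[i]) * X[i]     # subgradient step on linf_error

print("l2:", l2_error(w), " linf:", linf_error(w))
print("all patterns classified:", bool(np.all(np.sign(X @ w) == t)))
```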

  • Conference Article
  • Cited by 115
  • 10.24963/ijcai.2021/353
UniGNN: a Unified Framework for Graph and Hypergraph Neural Networks
  • Aug 1, 2021
  • Jing Huang + 1 more

Hypergraph, an expressive structure with the flexibility to model higher-order correlations among entities, has recently attracted increasing attention from various research domains. Despite the success of Graph Neural Networks (GNNs) for graph representation learning, how to adapt powerful GNN variants directly to hypergraphs remains a challenging problem. In this paper, we propose UniGNN, a unified framework for interpreting the message-passing process in graph and hypergraph neural networks, which can generalize general GNN models to hypergraphs. In this framework, meticulously designed architectures aiming to deepen GNNs can also be incorporated into hypergraphs with the least effort. Extensive experiments demonstrate the effectiveness of UniGNN on multiple real-world datasets, where it outperforms the state-of-the-art approaches by a large margin. Especially for the DBLP dataset, we increase the accuracy from 77.4% to 88.8% in the semi-supervised hypernode classification task. We further prove that the proposed message-passing-based UniGNN models are at most as powerful as the 1-dimensional Generalized Weisfeiler-Leman (1-GWL) algorithm in terms of distinguishing non-isomorphic hypergraphs. Our code is available at https://github.com/OneForward/UniGNN.
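
The unified message-passing scheme is easy to sketch: every hyperedge first aggregates the features of its vertices, then every vertex aggregates the messages of its incident hyperedges. A minimal numpy sketch follows, assuming mean aggregators for both stages (actual UniGNN variants plug GCN/GAT/SAGE-style aggregators into these two slots); all sizes and names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def unignn_layer(X, hyperedges, W):
    """X: (n, d) vertex features; hyperedges: lists of vertex indices."""
    # Stage 1: each hyperedge aggregates the features of its vertices.
    H = [X[e].mean(axis=0) for e in hyperedges]
    # Stage 2: each vertex aggregates its incident hyperedges' messages,
    # then applies a shared linear map and nonlinearity.
    out = np.zeros((X.shape[0], W.shape[1]))
    for i in range(X.shape[0]):
        msgs = [h for h, e in zip(H, hyperedges) if i in e]
        agg = np.mean(msgs, axis=0) if msgs else X[i]
        out[i] = np.maximum(agg @ W, 0.0)        # ReLU
    return out

X = rng.normal(size=(5, 4))
hyperedges = [[0, 1, 2], [2, 3], [0, 3, 4]]
print(unignn_layer(X, hyperedges, rng.normal(size=(4, 8))).shape)  # (5, 8)
```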

  • Research Article
  • 10.1109/embc.2018.8512871
Long-Term Depression Learning in Spinal Cord Networks.
  • Jul 1, 2018
  • Annual International Conference of the IEEE Engineering in Medicine and Biology Society
  • Zachary Baker + 4 more

Investigating learning in networks of spinal cord neurons can provide insight into the dynamics of connectivity in human spinal cords. It may also hold implications for developing neural prosthetics and neurocomputers. Culturing neural networks on microelectrode arrays (MEAs) allows for the repeated observation and stimulation of electrophysiological activity in vitro. Here we used MEAs to demonstrate learning in networks of spinal cord neurons. This was done by exposing E17 mouse spinal cord cultures to high frequency artificial spike trains, or tetanization. Unexpectedly, when comparing the networks' responses to low-frequency probing stimulations before and after tetanization, the cultures were found to demonstrate long-term depression (LTD). LTD was most significantly observed between 500-1000 ms after low-frequency probing. These results indicate that periodic high-frequency excitation of spinal cord networks can result in decreased synaptic efficacy.

  • Research Article
  • Cited by 17
  • 10.1016/j.neunet.2023.03.022
DropAGG: Robust Graph Neural Networks via Drop Aggregation
  • Mar 29, 2023
  • Neural Networks
  • Bo Jiang + 4 more


  • Conference Article
  • Cited by 7
  • 10.1109/ijcnn.2006.246658
Global Reinforcement Learning in Neural Networks with Stochastic Synapses
  • Jan 1, 2006
  • Xiaolong Ma + 1 more

We have found a more general formulation of the REINFORCE learning principle which had been proposed by R. J. Williams for the case of artificial neural networks with stochastic cells ("Boltzmann machines"). This formulation has enabled us to apply the principle to global reinforcement learning in networks with deterministic neural cells but stochastic synapses, and to suggest two groups of new learning rules for such networks, including simple local rules. Numerical simulations have shown that at least for several popular benchmark problems one of the new learning rules may provide results on a par with the best known global reinforcement techniques.
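
A minimal numpy sketch of the idea, under assumptions not taken from the paper (Gaussian synapses, a squared-error reward, a running-average baseline): the synaptic weights are sampled around learned means, the network itself is deterministic, and a single global reward drives a local REINFORCE update through the Gaussian's characteristic eligibility (w − μ)/σ².

```python
import numpy as np

rng = np.random.default_rng(0)

mu = rng.normal(size=(2, 1))        # means of the stochastic synapses
sigma, alpha, baseline = 0.1, 0.01, 0.0

def act(x, w):
    return np.tanh(x @ w)           # deterministic neural cell

X = np.array([[0.0, 1.0], [1.0, 0.0]])
targets = np.array([[1.0], [-1.0]])

for _ in range(2000):
    w = mu + sigma * rng.normal(size=mu.shape)   # sample the synapses
    r = -np.mean((act(X, w) - targets) ** 2)     # single global reward
    # Local REINFORCE step: the characteristic eligibility of a Gaussian
    # synapse is (w - mu) / sigma^2; (r - baseline) is the offset reward.
    mu += alpha * (r - baseline) * (w - mu) / sigma**2
    baseline += 0.1 * (r - baseline)             # running-average baseline

print("final reward:", -np.mean((act(X, mu) - targets) ** 2))
```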

  • Research Article
  • Cited by 28
  • 10.1109/tnn.2006.888376
Global Reinforcement Learning in Neural Networks
  • Mar 1, 2007
  • IEEE Transactions on Neural Networks
  • Xiaolong Ma + 1 more

In this letter, we have found a more general formulation of the REward Increment = Nonnegative Factor x Offset Reinforcement x Characteristic Eligibility (REINFORCE) learning principle first suggested by Williams. The new formulation has enabled us to apply the principle to global reinforcement learning in networks with various sources of randomness, and to suggest several simple local rules for such networks. Numerical simulations have shown that for simple classification and reinforcement learning tasks, at least one family of the new learning rules gives results comparable to those provided by the famous Rules A(r-i) and A(r-p) for the Boltzmann machines.
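
For concreteness, Williams' original formula for a single Bernoulli-logistic unit can be sketched in a few lines; the task and constants below are hypothetical. The update Δw = α(r − b)(y − p)x is exactly the stated product: a nonnegative factor α, an offset reinforcement (r − b), and the characteristic eligibility (y − p)x.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)
alpha, b = 0.2, 0.0                      # nonnegative factor, baseline
X = np.array([[1.0, 0.0], [0.0, 1.0]])
correct = np.array([1, 0])               # hypothetical desired outputs

for _ in range(1000):
    i = rng.integers(len(X))
    p = sigmoid(w @ X[i])                # firing probability
    y = float(rng.random() < p)          # stochastic binary output
    r = 1.0 if y == correct[i] else 0.0  # reward for the right output
    # Reward increment = alpha * (r - b) * characteristic eligibility:
    w += alpha * (r - b) * (y - p) * X[i]
    b += 0.05 * (r - b)                  # slowly adapting baseline

print("firing probabilities:", sigmoid(X @ w))  # approaches [1, 0]
```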

  • Research Article
  • Cited by 4
  • 10.5194/isprs-archives-xliii-b2-2022-577-2022
Deep Convolution Neural Networks with ResNet Architecture for Spectral-Spatial Classification of Drone Borne and Ground Based High Resolution Hyperspectral Imagery
  • May 30, 2022
  • The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
  • A Galodha + 3 more

Abstract. Drones have been of vital importance in the fields of surveillance, mapping, and infrastructure inspection. They have played a vital role in acquiring high-resolution images, and with the present need for precision farming they have helped in crop classification and in monitoring various crop patterns. With the recent advancement in computational power and the development of robust algorithms for deep feature learning and neural-network-based learning, such techniques have regained prominence in contemporary research areas such as classification of common 2-D and 3-D images, object detection, etc. In our research, we propose a deep convolutional neural network (CNN) architecture for the classification of aerial images captured by drones and high-resolution Terrestrial Hyperspectral (THS or HSI) imagery. The network comprises six layers with optimized weights: the input layer, a convolutional layer, a max-pooling layer, a fully connected layer, a softmax probability classifier, and the output layer. We acquired THS (using Cubert-GmbH data) and drone agricultural data of seasonal crops sown during the months of March-June 2017. Crop patterns include cabbage, eggplant, and tomato with varying nitrogen concentrations in the region of Bangalore, Southern India. To study the influence and impact of the CNN, a ResNets model has been applied, combined with a deep learning network followed by a recurrent neural network model (RCNN). The HSI input layer with corresponding ground-truth data for the region is fed into the ResNets model with a spectral and spatial residual network for the 7×7×139 input Hyperspectral Imagery (HSI) volume. The network includes two spectral and two spatial residual blocks. An average pooling layer and a fully connected layer transform the 5×5×24 spectral-spatial feature volume into a single output feature vector. We use an RMSProp optimizer for error-loss minimization, which, when applied to the drone data, achieved an overall accuracy of 97.16%. Similarly, for cabbage, eggplant, and tomato acquired through the same method, we achieved overall accuracies of 87.619%, 89.25%, and 80.566% respectively against ground-truth labels. Drone and ground-based datasets, equipped with good computational techniques, have become promising tools for improving the quality and efficiency of precision agriculture today.
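
A minimal PyTorch sketch of one spectral residual block in the spirit of the described network follows; the 7×7×139 patch shape and 24 feature maps come from the abstract, while kernel sizes, strides, and everything else are assumptions.

```python
import torch
import torch.nn as nn

class SpectralResBlock(nn.Module):
    """Residual block convolving only along the spectral (band) axis."""
    def __init__(self, channels=24, kernel_depth=7):
        super().__init__()
        pad = (kernel_depth // 2, 0, 0)  # pad only the spectral axis
        self.conv1 = nn.Conv3d(channels, channels, (kernel_depth, 1, 1), padding=pad)
        self.conv2 = nn.Conv3d(channels, channels, (kernel_depth, 1, 1), padding=pad)
        self.bn1 = nn.BatchNorm3d(channels)
        self.bn2 = nn.BatchNorm3d(channels)

    def forward(self, x):
        y = torch.relu(self.bn1(self.conv1(x)))
        y = self.bn2(self.conv2(y))
        return torch.relu(x + y)         # identity shortcut

# One 7x7 spatial patch with 139 bands, lifted to 24 feature maps.
stem = nn.Conv3d(1, 24, kernel_size=(7, 1, 1), stride=(2, 1, 1))
x = torch.randn(1, 1, 139, 7, 7)         # (batch, ch, bands, H, W)
h = SpectralResBlock()(stem(x))
print(h.shape)                           # torch.Size([1, 24, 67, 7, 7])
```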

  • Book Chapter
  • Cited by 1
  • 10.1016/b978-0-323-96104-2.00015-4
18 - A Lagrangian framework for learning in graph neural networks
  • Oct 20, 2023
  • Artificial Intelligence in the Age of Neural Networks and Brain Computing
  • Marco Maggini + 2 more


  • Research Article
  • Cited by 17
  • 10.1016/j.knosys.2021.107299
Graph neural networks with multiple kernel ensemble attention
  • Jul 24, 2021
  • Knowledge-Based Systems
  • Haimin Zhang + 1 more


  • Research Article
  • Cited by 6
  • 10.1109/mcom.001.2200810
Graph Representation Learning for Wireless Communications
  • Jan 1, 2024
  • IEEE Communications Magazine
  • Maryam Mohsenivatani + 4 more

Wireless networks are inherently graph-structured, and graph representation learning can be utilized to solve complex network optimization problems. In graph representation learning, feature vectors for each entity in the network are calculated such that they capture spatial and temporal dependencies in their local and global neighborhoods. Specifically, graph neural networks (GNNs) are powerful tools for solving these complex problems because of their expressive representation and reasoning power. In this paper, the potential of graph representation learning and GNNs in wireless networks is presented. An overview of graph representation learning is provided, covering fundamentals and concepts such as feature design over graphs, GNNs, and their design principles. The potential of graph representation learning in wireless networks is illustrated via a few exemplary use cases and some initial results on GNN-based access point selection for cell-free massive Multiple-Input Multiple-Output (MIMO) systems.
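
A minimal numpy sketch of the neighborhood aggregation such a GNN performs on a wireless-network graph (all names and sizes are hypothetical; nodes might be access points and users, with channel gains as edge weights): each round mixes a node's own features with a gain-weighted mean of its neighbors', so two rounds give every node two-hop context.

```python
import numpy as np

rng = np.random.default_rng(0)

def message_pass(H, A, W_self, W_nbr):
    """H: (n, d) node features; A: (n, n) weighted adjacency (channel gains)."""
    deg = A.sum(axis=1, keepdims=True) + 1e-9
    nbr = (A @ H) / deg                      # gain-weighted mean of neighbors
    return np.tanh(H @ W_self + nbr @ W_nbr)

n, d = 6, 3
A = rng.random((n, n)); A = (A + A.T) / 2.0  # symmetric toy gain matrix
np.fill_diagonal(A, 0.0)
H = rng.normal(size=(n, d))                  # per-node features (e.g., load)
W_self, W_nbr = rng.normal(size=(d, d)), rng.normal(size=(d, d))
for _ in range(2):                           # two rounds = two-hop context
    H = message_pass(H, A, W_self, W_nbr)
print(H.shape)                               # (6, 3) embeddings
```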

  • Research Article
  • Cited by 3
  • 10.4018/ijse.2020070103
A Comprehensive Study on Architecture of Neural Networks and Its Prospects in Cognitive Computing
  • Jul 1, 2020
  • International Journal of Synthetic Emotions
  • Sushree Bibhuprada B Priyadarshini

This paper proffers an overview of neural networks, covering early neural network architectures, learning methods, and applications. Neural networks are simplified models of biological nervous systems, which is why they have drawn crucial attention from the research community in the domain of artificial intelligence. Such networks are highly interconnected and possess a large number of processing elements known as neurons. They learn by example and exhibit mapping capability, generalization, and fault resilience, together with a high rate of information processing. The paper discusses the various types of learning methods employed in neural networks, then details the deep neural network (DNN), its key concepts, optimization strategies, and the activation functions used. Afterwards, logistic regression and conventional optimization approaches are described. Finally, various applications of neural networks across domains are covered before concluding.

  • Research Article
  • Cited by 9
  • 10.1016/j.ijrefrig.2023.04.006
Prediction of normal boiling point and critical temperature of refrigerants by graph neural network and transfer learning
  • Apr 12, 2023
  • International Journal of Refrigeration
  • Gang Wang + 1 more


  • Research Article
  • 10.69968/ijisem.2025v4i2313-319
Deep Learning and Graph Neural Networks for Mathematical Pattern Recognition: Techniques, Challenges, and Advances
  • Jun 19, 2025
  • International Journal of Innovations in Science Engineering And Management
  • Lipika Mishra + 2 more

Complex systems are often modelled using graphs, and one of the key tasks in complex system analysis is identifying anomalies in a graph. A graph anomaly is a pattern that does not follow the typical patterns predicted by the graph's structures and/or properties. The present article provides a comprehensive review of the techniques, challenges, and advancements in the field of Deep Learning and Graph Neural Networks for Mathematical Pattern Recognition. This review highlights the effectiveness of Deep Learning (DL) and Graph Neural Networks (GNNs) in mathematical pattern recognition. Graph-based models, particularly GraphMR built on Graph2Seq, demonstrate superior performance in model accuracy and efficiency over traditional Seq2Seq methods. GNNs effectively handle structured data like ASTs and DAGs, preserving semantic and syntactic information. The integration of encoder–decoder architectures and graph-based reasoning shows significant advancements in recognizing mathematical structures. The evolution from structural methods to DL and GNN approaches underscores the progress in recognition accuracy. As ML adoption grows, the need for large, high-quality datasets becomes critical for training next-generation models.
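
As a small illustration of the AST-to-graph step behind Graph2Seq-style models such as GraphMR, the sketch below parses an expression with Python's ast module and extracts nodes and parent-child edges; a real pipeline would add typed and reversed edges and feed the result to a GNN encoder. The function name is hypothetical.

```python
import ast

def expr_to_graph(expr):
    """Parse an expression; return AST node labels and parent->child edges."""
    tree = ast.parse(expr, mode="eval")
    nodes, edges = [], []

    def visit(node):
        idx = len(nodes)
        nodes.append(type(node).__name__)
        for child in ast.iter_child_nodes(node):
            edges.append((idx, visit(child)))
        return idx

    visit(tree.body)
    return nodes, edges

nodes, edges = expr_to_graph("a * (b + c)")
print(nodes)  # ['BinOp', 'Name', 'Mult', 'BinOp', 'Name', 'Add', 'Name']
print(edges)  # [(0, 1), (0, 2), (3, 4), (3, 5), (3, 6), (0, 3)]
```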

  • Conference Article
  • Cited by 4
  • 10.1109/bibm55620.2022.9995451
Graph Neural Networks for Z-DNA prediction in Genomes
  • Dec 6, 2022
  • Artem Voytetskiy + 2 more

Deep learning methods have been successfully applied to the tasks of predicting functional genomic elements such as histone marks, transcription factor binding sites, non-B DNA structures, and regulatory variants. Initially, convolutional neural networks (CNN) and recurrent neural networks (RNN), or hybrid CNN-RNN models, appeared to be the methods of choice for genomic studies. With the advance of machine learning algorithms, other deep learning architectures started to outperform CNN and RNN in various applications. Graph neural network (GNN) applications improved the prediction of drug effects, disease associations, protein-protein interactions, protein structures, and their functions. The performance of GNNs is yet to be fully explored in genomics. Earlier we developed the DeepZ approach, in which a deep learning model is trained on information from both sequence and omics data. Initially this approach was implemented with CNN and RNN but is not limited to these classes of neural networks. In this study we implemented the DeepZ approach by substituting the RNN with a GNN. We tested three different GNN architectures: Graph Convolutional Network (GCN), Graph Attention Network (GAT), and the inductive representation learning network GraphSAGE. The GNN models outperformed the current state-of-the-art RNN model from the initial DeepZ realization. GraphSAGE showed the best performance for the small training set of human Z-DNA ChIP-seq data, while the Graph Convolutional Network was superior for the specific curaxin-induced mouse Z-DNA data that was recently reported. Our results show the potential of GNN applications for the task of predicting genomic functional elements based on DNA sequence and omics data. Availability and implementation: The code is freely available at https://github.com/MrARVO/GraphZ.
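
Of the three architectures tested, the GCN is the simplest to write down. A minimal numpy sketch of one GCN layer follows; in the paper's setting the nodes would carry sequence and omics features per genomic position, while the toy sizes here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)

A = (rng.random((8, 8)) < 0.3).astype(float)
A = np.maximum(A, A.T)                           # undirected toy graph
np.fill_diagonal(A, 0.0)
H = rng.normal(size=(8, 5))                      # per-node input features
print(gcn_layer(A, H, rng.normal(size=(5, 16))).shape)  # (8, 16)
```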

  • Research Article
  • Cited by 6
  • 10.1016/j.neucom.2022.08.069
Deep transform and metric learning network: Wedding deep dictionary learning and neural network
  • Aug 27, 2022
  • Neurocomputing
  • Wen Tang + 3 more

