Learning Dynamic Graph Embeddings with Neural Controlled Differential Equations
This paper focuses on representation learning for dynamic graphs with temporal interactions. A fundamental difficulty is that the graph structure and the node features each follow their own dynamics, and their interplay makes the temporal evolution over graphs hard to model. Drawing inspiration from recent progress in embedding physical dynamics models within deep neural networks, we propose Graph Neural Controlled Differential Equations (GN-CDEs), a continuous-time framework that jointly models node embeddings and structural dynamics by incorporating a graph-enhanced neural network vector field with a time-varying graph path as the control signal. Our framework exhibits several desirable characteristics, including the ability to express dynamics on evolving graphs without piecewise integration, the capability to calibrate trajectories with subsequent data, and robustness to missing observations. Empirical evaluation on a range of dynamic graph representation learning tasks demonstrates the effectiveness of our proposed approach in capturing the complex dynamics of dynamic graphs.
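The controlled-dynamics idea can be sketched numerically: integrate node embeddings with an Euler step in which the increment of the time-varying graph path acts as the control. This is a toy illustration under stated assumptions (a tanh vector field, random weights, Euler discretization), not the paper's GN-CDE implementation; all names here are illustrative.

```python
import numpy as np

def gn_cde_euler(z0, adjacencies, dt=1.0):
    """Euler integration of dz = f(A_t, z) dX_t, a toy graph-controlled CDE.

    z0: (n, d) initial node embeddings. adjacencies: list of (n, n) graph
    snapshots playing the role of the control path X_t, so the control
    increment is dX ~ A_{t+1} - A_t.
    """
    rng = np.random.default_rng(0)
    d = z0.shape[1]
    W = rng.standard_normal((d, d)) / np.sqrt(d)  # vector-field weights
    z = z0.copy()
    traj = [z.copy()]
    for A_prev, A_next in zip(adjacencies[:-1], adjacencies[1:]):
        dX = A_next - A_prev             # control increment from the graph path
        f = np.tanh(A_next @ z @ W)      # graph-enhanced vector field, (n, d)
        z = z + dt * (dX @ f)            # Euler step: dz = f(z) dX
        traj.append(z.copy())
    return traj
```

Because the update is driven by graph increments rather than restarted per snapshot, the trajectory stays continuous across structural changes, which mirrors the "no piecewise integration" property claimed above.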
- Research Article
8
- 10.1186/s40537-024-00918-5
- Apr 13, 2024
- Journal of Big Data
Educational big data significantly impacts education, and Massive Open Online Courses (MOOCs), a crucial learning approach, have evolved to be more intelligent with these technologies. Deep neural networks have significantly advanced a crucial task within MOOCs: predicting student academic performance. However, most deep learning-based methods ignore the temporal information and interaction behaviors in learning activities, which could otherwise enhance a model's predictive accuracy. To tackle this, we formulate the learning processes of e-learning students as dynamic temporal graphs that encode the temporal information and interaction behaviors during their studying. We propose a novel academic performance prediction model (APP-TGN) based on temporal graph neural networks. Specifically, in APP-TGN, a dynamic graph is constructed from online learning activity logs. A temporal graph network with low-high filters learns potential academic performance variations encoded in the dynamic graphs. Furthermore, a global sampling module is developed to mitigate the problem of false correlations in deep learning-based models. Finally, multi-head attention is utilized for predicting academic outcomes. Extensive experiments are conducted on a well-known public dataset. The experimental results indicate that APP-TGN significantly surpasses existing methods and demonstrates excellent potential in automated feedback and personalized learning.
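The dynamic-graph construction step described above can be illustrated with a minimal sketch that buckets activity logs into per-window student-resource edge sets. The record format, window scheme, and function name are assumptions for illustration, not APP-TGN's actual pipeline.

```python
from collections import defaultdict

def logs_to_temporal_graphs(logs, window):
    """Bucket e-learning activity logs into per-window interaction graphs.

    logs: iterable of (student, resource, timestamp) records. Returns a dict
    mapping window index -> set of (student, resource) edges, a minimal
    stand-in for building a dynamic temporal graph from activity logs.
    """
    graphs = defaultdict(set)
    for student, resource, t in logs:
        graphs[int(t // window)].add((student, resource))
    return dict(graphs)
```

A temporal GNN would then consume the per-window graphs in order, so interactions close in time land in the same snapshot.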
- Research Article
24
- 10.1287/ijoc.2022.1172
- Mar 1, 2022
- INFORMS Journal on Computing
Co-movement among individual firms’ stock prices can reflect complex interfirm relationships. This paper proposes a novel method to leverage such relationships for stock price predictions by adopting inductive graph representation learning on dynamic stock graphs constructed from historical stock price co-movement. To learn node representations from such dynamic graphs for better stock predictions, we propose the hybrid-attention dynamic graph neural network, an inductive graph representation learning method. We also extend mini-batch gradient descent to inductive representation learning on dynamic stock graphs so that the model can update parameters over mini-batch stock graphs with higher training efficiency. Extensive experiments on stocks from different markets and trading simulations demonstrate that the proposed method significantly improves stock predictions. The proposed method can have important implications for the management of financial portfolios and investment risk. Summary of Contribution: Accurate predictions of stock prices have important implications for financial decisions. In today’s economy, individual firms are increasingly connected via different types of relationships. As a result, firms’ stock prices often feature synchronous co-movement patterns. This paper represents the first effort to leverage such phenomena to construct dynamic stock graphs for stock predictions. We develop the hybrid-attention dynamic graph neural network (HAD-GNN), an inductive graph representation learning framework for dynamic stock graphs that incorporates temporal and graph attention mechanisms. To improve the learning efficiency of HAD-GNN, we also extend mini-batch gradient descent to inductive representation learning on such dynamic graphs and adopt a t-batch training mechanism (t-BTM). We demonstrate the effectiveness of our new approach via experiments based on real-world data and simulations.
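A minimal sketch of the co-movement graph construction: threshold the absolute return correlation over a sliding window, one graph per window. The threshold value, windowing scheme, and function names are illustrative assumptions, not HAD-GNN's exact procedure.

```python
import numpy as np

def comovement_graph(returns, threshold=0.5):
    """Build an undirected stock graph from price co-movement.

    returns: (T, n) matrix of per-period returns for n stocks. Edges connect
    stocks whose absolute return correlation exceeds `threshold`.
    """
    C = np.corrcoef(returns.T)                   # (n, n) correlation matrix
    A = (np.abs(C) >= threshold).astype(float)
    np.fill_diagonal(A, 0.0)                     # no self-loops
    return A

def dynamic_graphs(returns, window=30, step=30, threshold=0.5):
    """Slide a window over the return series to get one graph per period."""
    T = returns.shape[0]
    return [comovement_graph(returns[s:s + window], threshold)
            for s in range(0, T - window + 1, step)]
```

The resulting adjacency sequence is what an inductive dynamic-graph model can then be trained over mini-batch by mini-batch.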
- Conference Article
44
- 10.1145/3340531.3411946
- Oct 19, 2020
Dynamic graphs such as user-item interaction graphs and financial transaction networks are ubiquitous nowadays. While numerous representation learning methods for static graphs have been proposed, the study of dynamic graphs is still in its infancy. A main challenge of modeling dynamic graphs is how to effectively encode temporal and structural information into nonlinear and compact dynamic embeddings. To achieve this, we propose a principled graph-neural-based approach to learn continuous-time dynamic embeddings. We first define a temporal dependency interaction graph (TDIG) that is induced from sequences of interaction data. Based on the topology of this TDIG, we develop a dynamic message passing neural network named TDIG-MPNN, which can capture fine-grained global and local information on the TDIG. In addition, to enhance the quality of continuous-time dynamic embeddings, a novel selection mechanism composed of two successive steps, i.e., co-attention and gating, is applied before the TDIG-MPNN layer to adjust the importance of the nodes by considering high-order correlation between interactive nodes' k-depth neighbors on the TDIG. Finally, we cast our learning problem in the framework of temporal point processes (TPPs), where we use TDIG-MPNN to design a neural intensity function for the dynamic interaction processes. Our model achieves superior performance over alternatives on temporal interaction prediction (including transductive and inductive tasks) on multiple datasets.
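The TPP formulation hinges on a strictly positive intensity. A common construction, sketched below under the assumption of a simple linear scoring function (the real model scores pairs with TDIG-MPNN embeddings), maps a node-pair embedding to a rate through a softplus.

```python
import numpy as np

def softplus(x):
    """Numerically stable softplus: log(1 + exp(x))."""
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

def interaction_intensity(z_u, z_v, w, b=0.0):
    """Neural intensity for a temporal point process over node pairs.

    Maps the concatenated embeddings of nodes u and v to a positive rate via
    softplus, so lambda > 0 as a TPP requires. In practice w and b are learned
    jointly with the embedding network.
    """
    score = np.concatenate([z_u, z_v]) @ w + b
    return softplus(score)
```

Positivity is what lets the intensity enter the TPP log-likelihood (log-intensity at observed events minus its integral over time) without further constraints.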
- Research Article
2
- 10.1016/j.jnlssr.2024.05.005
- Jul 22, 2024
- Journal of Safety Science and Resilience
DyHDGE: Dynamic heterogeneous transaction graph embedding for safety-centric fraud detection in financial scenarios
- Research Article
- 10.1609/aaai.v36i11.21682
- Jun 28, 2022
- Proceedings of the AAAI Conference on Artificial Intelligence
Representation learning in dynamic graphs is a challenging problem because the graph topology and node features vary over time. This requires the model to effectively capture both graph topology information and temporal information. Most existing works are built on recurrent neural networks (RNNs), which are used to extract temporal information from dynamic graphs, and thus they inherit the drawbacks of RNNs. In this paper, we propose Learning to Evolve on Dynamic Graphs (LEDG), a novel algorithm that jointly learns graph information and time information. Specifically, our approach utilizes gradient-based meta-learning to learn updating strategies that generalize better across snapshots than RNNs. It is model-agnostic and thus can train any message-passing-based graph neural network (GNN) on dynamic graphs. To enhance the representation power, we disentangle the embeddings into time embeddings and graph intrinsic embeddings. We conduct experiments on various datasets and downstream tasks, and the experimental results validate the effectiveness of our method.
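The gradient-based meta-learning idea can be sketched with a first-order inner/outer loop: adapt the parameters on snapshot t, then evaluate the adapted parameters on snapshot t+1, so the update rule itself is trained to generalize across time. The linear model, squared loss, and first-order approximation below are simplifying assumptions for illustration, not LEDG's actual architecture.

```python
import numpy as np

def meta_update(theta, snapshots, inner_lr=0.1, outer_lr=0.01):
    """One first-order meta-learning step over a sequence of graph snapshots.

    snapshots: list of (X, y) tasks, one per time step. The inner step adapts
    theta on snapshot t; the outer loss is taken on snapshot t+1. A linear
    model with squared loss keeps the gradients exact and easy to check.
    """
    def grad(th, X, y):                 # d/dtheta of 0.5 * ||X th - y||^2
        return X.T @ (X @ th - y)

    outer_grad = np.zeros_like(theta)
    for (Xs, ys), (Xq, yq) in zip(snapshots[:-1], snapshots[1:]):
        adapted = theta - inner_lr * grad(theta, Xs, ys)   # inner adaptation
        # first-order approximation: treat `adapted` as detached from theta
        outer_grad += grad(adapted, Xq, yq)
    return theta - outer_lr * outer_grad
```

Because the outer loss is always measured one snapshot ahead, minimizing it pushes the learned update strategy toward temporal generalization rather than memorizing a single snapshot.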
- Research Article
1
- 10.4314/jasem.v27i11.35
- Nov 28, 2023
- Journal of Applied Sciences and Environmental Management
Reservoirs of natural gas and gas condensate have been proposed as a potential source of affordable, cleaner energy to meet growing global population and industrial demand. This work evaluates reservoir simulation for production optimization using a deep neural network - artificial neural network (DNN-ANN) model to predict the dew point pressure in gas condensate reservoirs from Field-X in the Niger Delta Region of Nigeria. The dew-point pressure (DPP) of gas condensate reservoirs was estimated as a function of gas composition, reservoir temperature, and the molecular weight and specific gravity of the heptane-plus fraction. Results obtained show that the mean relative error (MRE) and R-squared (R2) are 3.35% and 0.99965, respectively, indicating that the model is excellent at predicting DPP values. The DNN-ANN model is also evaluated in comparison to earlier models created by previous authors. It was recommended that the DNN-ANN model developed in this study could be applied to reservoir simulation and modeling, well performance analysis, reservoir engineering problems, and production optimization.
- Research Article
5
- 10.1016/j.neunet.2023.11.060
- Dec 1, 2023
- Neural Networks
Black-box attacks on dynamic graphs via adversarial topology perturbations
- Research Article
17
- 10.1016/j.knosys.2021.107453
- Aug 30, 2021
- Knowledge-Based Systems
FILDNE: A Framework for Incremental Learning of Dynamic Networks Embeddings
- Research Article
5
- 10.1016/j.knosys.2024.111952
- May 16, 2024
- Knowledge-Based Systems
Backbone-based Dynamic Spatio-Temporal Graph Neural Network for epidemic forecasting
- Conference Article
3
- 10.1109/icassp49357.2023.10094834
- Jun 4, 2023
Many real-world networks, such as social and traffic networks, vary over time and can be modeled as dynamic graphs. Despite the significant number of systems that could benefit from algorithmic tools for dynamic graphs, dynamic graph representation learning is an under-explored research area. Furthermore, while the fairness of algorithms is essential for their deployment in real-world systems, to the best of our knowledge this issue has never been considered in the context of dynamic graphs. Motivated by this, the present study proposes an efficient online node representation learning framework over dynamic graphs that can also mitigate bias. Specifically, the proposed technique combines different observations (graph structure and nodal attributes) of the same source (an attributed graph) in a complementary way while also reducing the intrinsic bias in the learned representations. Experimental results on dynamic graphs show that the proposed online strategy can improve group fairness measures for node classification with comparable or better utility than the baselines.
- Research Article
26
- 10.1016/j.jestch.2018.08.010
- Sep 7, 2018
- Engineering Science and Technology, an International Journal
Deep neural network model for group activity recognition using contextual relationship
- Conference Article
62
- 10.1145/3308560.3316581
- May 13, 2019
Graph representation learning for static graphs is a well-studied topic. Recently, a few studies have focused on learning temporal information in addition to the topology of a graph. Most of these studies have relied on learning to represent nodes and substructures in dynamic graphs. However, the representation learning problem for entire graphs in a dynamic context is yet to be addressed. In this paper, we propose an unsupervised representation learning architecture for dynamic graphs, designed to learn both the topological and temporal features of graphs that evolve over time. The approach consists of a sequence-to-sequence encoder-decoder model embedded with gated graph neural networks (GGNNs) and long short-term memory networks (LSTMs). The GGNN is able to learn the topology of the graph at each time step, while LSTMs are leveraged to propagate the temporal information among the time steps. Moreover, an encoder learns the temporal dynamics of an evolving graph and a decoder reconstructs the dynamics over the same period of time using the encoded representation provided by the encoder. We demonstrate that our approach is capable of learning the representation of a dynamic graph through time by applying the embeddings to dynamic graph classification using a real-world dataset of animal behaviour.
- Research Article
19
- 10.1016/j.neucom.2022.01.064
- Jan 22, 2022
- Neurocomputing
A unified structure learning framework for graph attention networks
- Book Chapter
1
- 10.1007/978-3-031-20865-2_32
- Jan 1, 2022
Graph Transformer Networks (GTNs) use an attention mechanism to learn node representations in a static graph and achieve state-of-the-art results on several graph learning tasks. However, due to the computational complexity of the attention operation, GTNs are not applicable to dynamic graphs. In this paper, we propose the Dynamic-GTN model, which is designed to learn node embeddings in a continuous-time dynamic graph. The Dynamic-GTN extends the attention mechanism in a standard GTN to include temporal information from recent node interactions. Based on temporal interaction patterns between nodes, the Dynamic-GTN employs a node sampling step to reduce the number of attention operations in the dynamic graph. We evaluate our model on three benchmark datasets for learning node embeddings in dynamic graphs. The results show that the Dynamic-GTN achieves better accuracy than state-of-the-art graph neural networks on both transductive and inductive graph learning tasks. Keywords: Graph Transformer Network, Dynamic graph, Node sampling
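The node-sampling step that keeps attention affordable can be sketched as recency-biased neighbor sampling: attend only over a small set of neighbors drawn with probability decaying in the time since their last interaction. The exponential decay, parameter names, and event format are illustrative assumptions rather than Dynamic-GTN's exact sampler.

```python
import numpy as np

def sample_recent_neighbors(events, node, now, k=5, tau=10.0, rng=None):
    """Sample up to k temporal neighbors of `node`, favouring recent ones.

    events: list of (u, v, t) interactions. An event's sampling weight decays
    as exp(-(now - t) / tau), so attention is concentrated on recent
    interactions instead of the full neighborhood.
    """
    rng = rng or np.random.default_rng(0)
    nbrs = [(v if u == node else u, t) for (u, v, t) in events
            if node in (u, v) and t <= now]
    if not nbrs:
        return []
    w = np.array([np.exp(-(now - t) / tau) for _, t in nbrs])
    p = w / w.sum()
    idx = rng.choice(len(nbrs), size=min(k, len(nbrs)), replace=False, p=p)
    return [nbrs[i] for i in idx]
```

With k fixed, each attention computation touches at most k neighbors regardless of how dense the temporal neighborhood grows, which is the source of the claimed savings.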
- Research Article
1
- 10.1016/j.eswa.2024.124201
- May 14, 2024
- Expert Systems With Applications
A novel robust black-box fingerprinting scheme for deep classification neural networks