Differential equations are a ubiquitous tool for studying dynamics, ranging from physical systems to complex systems in which a large number of agents interact through a graph. Data-driven approximations of differential equations present a promising alternative to traditional methods for uncovering models of dynamical systems, especially in complex systems that lack explicit first principles. Neural networks have recently been employed as a machine learning tool for studying dynamics; they can be used both to find solutions of differential equations and to discover the equations themselves. However, deploying deep learning models in unfamiliar settings, such as predicting dynamics in unobserved regions of state space or on novel graphs, can lead to spurious results. Focusing on complex systems whose dynamics are described by a system of first-order differential equations coupled through a graph, we study the generalization of neural network predictions in settings where the statistical properties of the test data differ from those of the training data. We find that neural networks can accurately predict dynamics beyond the immediate training setting, provided the predictions remain within the domain of the training data. To identify when a model is unable to generalize to novel settings, we propose a statistical significance test.
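To make the setup concrete, below is a minimal sketch of the kind of graph-coupled, first-order system the abstract describes and how a neural network might approximate its right-hand side. The architecture, names (GraphDynamics, the self term F and coupling term G), the toy adjacency matrix, and the training loop are illustrative assumptions, not the paper's actual method or data.

```python
import torch
import torch.nn as nn

# Assumed form of the dynamics (not taken from the paper): each node i follows
#   dx_i/dt = F(x_i) + sum_j A_ij * G(x_i, x_j),
# where A is the graph adjacency matrix. Two small MLPs approximate F and G
# from observed states and derivative estimates.
class GraphDynamics(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        # Self-interaction term F(x_i)
        self.f = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        # Pairwise coupling term G(x_i, x_j)
        self.g = nn.Sequential(nn.Linear(2, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, x, adj):
        # x: (n_nodes, 1) node states; adj: (n_nodes, n_nodes) adjacency matrix
        n = x.shape[0]
        xi = x.unsqueeze(1).expand(n, n, 1)                      # x_i along rows
        xj = x.unsqueeze(0).expand(n, n, 1)                      # x_j along columns
        pair = self.g(torch.cat([xi, xj], dim=-1)).squeeze(-1)   # G(x_i, x_j) for all pairs
        coupling = (adj * pair).sum(dim=1, keepdim=True)         # sum_j A_ij G(x_i, x_j)
        return self.f(x) + coupling                              # predicted dx/dt

# Sketch of fitting the model to derivative targets (placeholder data only).
model = GraphDynamics()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])   # toy 3-node path graph
x_t = torch.rand(3, 1)                                           # placeholder states at time t
dxdt = torch.rand(3, 1)                                          # placeholder derivative estimates
for _ in range(100):
    opt.zero_grad()
    loss = ((model(x_t, adj) - dxdt) ** 2).mean()
    loss.backward()
    opt.step()
```

Because the learned terms F and G depend only on node states and pairwise interactions, such a model can in principle be evaluated on a different adjacency matrix than the one used for training, which is the kind of out-of-training-setting prediction whose reliability the abstract examines.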