When solving Hamiltonian systems with numerical integrators, preserving the symplectic structure can be crucial. At the same time, chaotic or stiff problems require integrators that approximate the trajectories with extreme precision. Integrating Hamilton's equations to a level of reliability sufficient for scientific interpretation can therefore be computationally expensive. A neural network, however, can be a viable alternative to numerical integrators, offering high-fidelity solutions orders of magnitude faster.

To understand whether preserving symplecticity remains important when neural networks are used, we analyze three well-known neural network architectures that embed the symplectic structure in the network's topology. These architectures share many similarities, which allows us to formulate a new, generalized framework that includes Symplectic Recurrent Neural Networks, SympNets and HénonNets as special cases. Moreover, this framework enables us to construct novel network topologies by transitioning between the established ones.

We compare the new Generalized Hamiltonian Neural Networks (GHNNs) against the established SympNets, HénonNets and physics-unaware multilayer perceptrons, using data for a pendulum, a double pendulum and a gravitational three-body problem. To achieve a fair comparison, the hyperparameters of the different networks are chosen such that all four architectures have the same prediction speed during inference. A special focus lies on the networks' capability to generalize beyond the training data. The GHNNs outperform all other neural network architectures on the problems considered.