Abstract

Canonical transformations play a fundamental role in simplifying and solving classical Hamiltonian systems. We construct flexible and powerful canonical transformations as generative models using symplectic neural networks. The model transforms physical variables towards a latent representation with an independent harmonic oscillator Hamiltonian. Correspondingly, the phase space density of the physical system flows towards a factorized Gaussian distribution in the latent space. Since the canonical transformation preserves the Hamiltonian evolution, the model captures nonlinear collective modes in the learned latent representation. We present an efficient implementation of symplectic neural coordinate transformations and two ways to train the model: a variational free energy calculation based on the analytical form of the physical Hamiltonian, and phase space density estimation, which only requires samples in coordinate space for separable Hamiltonians. We demonstrate appealing features of the neural canonical transformation on toy problems, including a two-dimensional ring potential and a harmonic chain. Finally, we apply the approach to real-world problems such as identifying slow collective modes in alanine dipeptide and conceptual compression of the MNIST dataset.
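
To make the abstract concrete, here is a minimal PyTorch sketch of a symplectic flow built from gradient "kick" and "drift" shear layers. This is one simple family of exactly canonical, volume-preserving layers, not necessarily the paper's neural point transformation; all names (`SymplecticShear`, `scalar_mlp`, `grad_of`, `V`, `T`) are illustrative choices for this sketch.

```python
import math
import torch

def scalar_mlp(dim, hidden=64):
    # Small scalar-valued network used as a generating "potential".
    return torch.nn.Sequential(
        torch.nn.Linear(dim, hidden), torch.nn.Tanh(),
        torch.nn.Linear(hidden, 1))

def grad_of(net, x):
    # d net(x).sum() / dx; create_graph keeps the flow trainable end to end.
    with torch.enable_grad():
        x = x if x.requires_grad else x.detach().requires_grad_(True)
        return torch.autograd.grad(net(x).sum(), x, create_graph=True)[0]

class SymplecticShear(torch.nn.Module):
    """Kick-then-drift pair:  p -> p - dV/dq,  then  q -> q + dT/dp.
    Each shear has a symmetric Jacobian block, so the pair is an exact
    canonical transformation with unit Jacobian determinant."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.V = scalar_mlp(dim, hidden)  # generates the momentum kick
        self.T = scalar_mlp(dim, hidden)  # generates the coordinate drift

    def forward(self, q, p):
        p = p - grad_of(self.V, q)  # momentum kick
        q = q + grad_of(self.T, p)  # coordinate drift
        return q, p

# Volume preservation makes the change-of-variables density trivial: with a
# factorized Gaussian latent prior (unit frequencies assumed for simplicity),
# log rho(q, p) = log N((Q, P); 0, 1) with zero log-det correction.
dim = 2
layers = [SymplecticShear(dim) for _ in range(4)]
q, p = torch.randn(8, dim), torch.randn(8, dim)
for layer in layers:
    q, p = layer(q, p)
log_rho = -0.5 * (q**2 + p**2).sum(dim=1) - dim * math.log(2 * math.pi)
```

A flow of this kind could then be trained in either of the two ways the abstract describes: variationally, minimizing a free energy built from the analytical physical Hamiltonian, or by maximum likelihood on coordinate-space samples.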

Highlights

  • The inherent symplectic structure of classical Hamiltonian mechanics has profound theoretical and practical implications [1]

  • We demonstrate the method first on toy problems and then on real-world problems, such as identifying and interpolating slow collective modes of the alanine dipeptide molecule and of MNIST database images

  • Canonical transformations, which preserve the symplectic symmetry of phase space, have been a key technique for simplifying and solving Hamiltonian dynamics (the defining condition is recalled below)
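
For reference, the symplectic symmetry invoked above is the standard textbook condition (not specific to this paper) on the Jacobian of a phase-space map:

```latex
% A map (q, p) -> (Q, P) is canonical iff its Jacobian M satisfies
M = \frac{\partial(Q, P)}{\partial(q, p)}, \qquad
M^{\mathsf{T}} J M = J, \qquad
J = \begin{pmatrix} 0 & \mathbb{1} \\ -\mathbb{1} & 0 \end{pmatrix}.
% This implies \det M = 1, so phase-space volume is conserved (Liouville's theorem).
```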


INTRODUCTION

The inherent symplectic structure of classical Hamiltonian mechanics has profound theoretical and practical implications [1]. Several deep-learning approaches have been proposed to identify nonlinear coordinate transformations of dynamical systems [16,17,18,19,20]. In parallel, extracting slow features from general time-series data is an active research direction in the machine-learning community [14,21,22]. We present learning algorithms and discuss applications of the neural canonical transformation to the extraction of slow collective variables from physical systems and realistic datasets. Several recent preprints on related topics [27,28,29,30,31] aim to improve performance on machine-learning tasks by imposing physics-motivated inductive biases in the design of neural networks.
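
As a quick sanity check of that symplectic structure (assuming the illustrative `SymplecticShear` sketch above), one can verify numerically that the layer's full phase-space Jacobian M satisfies the canonical condition M^T J M = J:

```python
import torch

dim = 2
layer = SymplecticShear(dim)  # illustrative class from the sketch above

def phase_map(x):
    # Flatten (q, p) into one vector so we can take a single full Jacobian.
    q, p = layer(x[:dim].unsqueeze(0), x[dim:].unsqueeze(0))
    return torch.cat([q.squeeze(0), p.squeeze(0)])

x = torch.randn(2 * dim)
M = torch.autograd.functional.jacobian(phase_map, x)  # (2*dim, 2*dim)
J = torch.zeros(2 * dim, 2 * dim)
J[:dim, dim:] = torch.eye(dim)
J[dim:, :dim] = -torch.eye(dim)
print(torch.allclose(M.T @ J @ M, J, atol=1e-5))      # True: the map is canonical
```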

Canonical transformation of Hamiltonian systems
Normalizing flow models
Connections between canonical transformation and normalizing flow models

CANONICAL TRANSFORMATION USING NORMALIZING FLOW MODELS

Model architectures
  Neural point transformations
  Latent-space Hamiltonian and prior distribution
Training approaches
  Variational approach
  Maximum likelihood estimation
Applications
  Thermodynamics and excitation spectra
  Identifying collective variables from slow modes

EXAMPLES

Ringworld
Harmonic chain
Alanine dipeptide
MNIST handwritten digits

DISCUSSIONS

Linear symplectic transformation
Continuous symplectic flow