Abstract

Hamiltonian Monte Carlo (HMC) is a powerful framework for sampling from high-dimensional continuous distributions. Langevin Monte Carlo (LMC) is a special case of HMC that is widely used in Deep Learning applications. Given an n-dimensional continuous density P(X), the only requirement for implementing HMC is the differentiability of the energy \(U(X) = - \log P(X)\). Like other MCMC methods (e.g. slice sampling, Swendsen-Wang cuts), HMC introduces auxiliary variables to facilitate movement in the original space. In HMC, the original variables represent position, and the auxiliary variables represent momentum. Each position dimension has a single corresponding momentum variable, so the joint space of the original and auxiliary variables has dimension 2n, twice the size of the original space. Once the momentum variables are introduced, Hamilton’s Equations are used to simulate the time evolution of a physical system with potential energy U. The properties of Hamilton’s Equations ensure that movement in the joint space preserves the distribution of P in the original space.
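The procedure described above can be sketched as a minimal HMC step, assuming a standard-normal target so that \(U(x) = \tfrac{1}{2} x^\top x\) and \(\nabla U(x) = x\); the function and parameter names below are illustrative, not from the paper.

```python
import numpy as np

# Hypothetical target: standard normal, U(x) = 0.5 * x.x, grad U(x) = x.
def U(x):
    return 0.5 * np.dot(x, x)

def grad_U(x):
    return x

def hmc_step(x, step_size=0.1, n_leapfrog=20, rng=None):
    """One HMC transition: sample momentum, leapfrog, accept/reject."""
    rng = rng or np.random.default_rng()
    p = rng.standard_normal(x.shape)              # auxiliary momentum, one per dimension
    x_new, p_new = x.copy(), p.copy()
    # Leapfrog integration of Hamilton's equations
    p_new -= 0.5 * step_size * grad_U(x_new)      # half step for momentum
    for _ in range(n_leapfrog - 1):
        x_new += step_size * p_new                # full step for position
        p_new -= step_size * grad_U(x_new)        # full step for momentum
    x_new += step_size * p_new
    p_new -= 0.5 * step_size * grad_U(x_new)      # final half step
    # Metropolis correction with total energy H = U + kinetic energy
    H_old = U(x) + 0.5 * np.dot(p, p)
    H_new = U(x_new) + 0.5 * np.dot(p_new, p_new)
    if rng.random() < np.exp(H_old - H_new):
        return x_new
    return x

rng = np.random.default_rng(0)
x = np.zeros(2)
samples = []
for _ in range(2000):
    x = hmc_step(x, rng=rng)
    samples.append(x.copy())
samples = np.array(samples)
```

Because the leapfrog integrator is volume-preserving and time-reversible, the Metropolis correction above is all that is needed to make the chain leave the target distribution invariant; the collected samples should be approximately standard normal.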
