Abstract

This article describes a new Monte Carlo algorithm, dynamically weighted importance sampling (DWIS), for simulation and optimization. In DWIS, the state of the Markov chain is augmented to a population. At each iteration, the population is subject to two move steps, dynamic weighting and population control. These steps ensure that DWIS can move across energy barriers like dynamic weighting, but with the weights well controlled and with a finite expectation. The estimates can converge much faster than they can with dynamic weighting. A generalized theory for importance sampling is introduced to justify the new algorithm. Numerical examples are given to show that dynamically weighted importance sampling can perform significantly better than the Metropolis–Hastings algorithm and dynamic weighting in some situations.
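The abstract does not spell out the moves themselves, but the dynamic-weighting idea DWIS builds on can be illustrated with a minimal sketch. The code below implements the classical R-type dynamic-weighting move of Wong and Liang (1997), not the full DWIS algorithm with its population-control step; the target, proposal, and parameter `theta` are illustrative assumptions, and a symmetric proposal is assumed so the Metropolis ratio reduces to a ratio of target densities.

```python
import math
import random


def r_type_move(x, w, log_target, propose, theta=1.0, rng=random):
    """One R-type dynamic-weighting move (Wong & Liang, 1997).

    The pair (x, w) is a state and its importance weight.  Instead of
    rejecting uphill moves outright, the weight absorbs the Metropolis
    ratio, which lets the chain cross energy barriers -- at the price of
    weights that can grow, the problem DWIS's population control addresses.
    """
    y = propose(x, rng)
    # Metropolis ratio for a symmetric proposal.
    r = math.exp(log_target(y) - log_target(x))
    a = w * r / (w * r + theta)        # acceptance probability
    if rng.random() < a:
        return y, w * r / a            # accept: new weight = w*r + theta
    return x, w / (1.0 - a)            # reject: weight inflated instead


def run_chain(x0, n_steps, log_target, propose, seed=0):
    """Run a dynamically weighted chain, returning (state, weight) pairs."""
    rng = random.Random(seed)
    x, w = x0, 1.0
    samples = []
    for _ in range(n_steps):
        x, w = r_type_move(x, w, log_target, propose, rng=rng)
        samples.append((x, w))
    return samples
```

A quantity of interest is then estimated by the weighted average `sum(w*h(x)) / sum(w)` over the collected pairs; the heavy-tailed weight distribution of this plain scheme is exactly what motivates the population-control step of DWIS.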
