Abstract

Markov chain Monte Carlo (MCMC) techniques are typically used to infer model parameters when closed-form inference is not feasible, with one of the simplest MCMC methods being the random walk Metropolis–Hastings (MH) algorithm. The MH algorithm suffers from random walk behaviour, which results in inefficient exploration of the target posterior distribution. This method has been improved upon, with algorithms such as the Metropolis Adjusted Langevin Algorithm (MALA) and Hamiltonian Monte Carlo being popular modifications of MH. In this work, we revisit the MH algorithm to reduce the autocorrelations in the generated samples without adding significant computational time. We present: (1) the Stochastic Volatility Metropolis–Hastings (SVMH) algorithm, which uses a random scaling matrix in the MH algorithm, and (2) the Locally Scaled Metropolis–Hastings (LSMH) algorithm, in which the scaling matrix depends on the local geometry of the target distribution. For both algorithms, the proposal distribution remains Gaussian centred at the current state. The empirical results show that these minor additions to the MH algorithm significantly improve the effective sample rates and predictive performance over the vanilla MH method. The SVMH algorithm produces effective sample sizes similar to those of the LSMH method, with SVMH outperforming LSMH on an execution-time-normalised effective sample size basis. The performance of the proposed methods is also compared to that of MALA and the current state-of-the-art method, the No-U-Turn Sampler (NUTS). The analysis is performed using a simulation study based on Neal's funnel and multivariate Gaussian distributions, and using real-world data modelled with jump diffusion processes and Bayesian logistic regression.
Although both MALA and NUTS outperform the proposed algorithms on an effective sample size basis, the SVMH algorithm has similar or better predictive performance when compared to MALA and NUTS across the various targets. In addition, the SVMH algorithm outperforms the other MCMC algorithms on a normalised effective sample size basis on the jump diffusion process datasets. These results indicate the overall usefulness of the proposed algorithms.
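The SVMH idea described above — a Gaussian random walk proposal whose scale is itself drawn at random each iteration — can be sketched in a few lines. Note this is a minimal illustrative sketch, not the paper's implementation: the log-normal scale distribution and its parameters are assumptions for demonstration only.

```python
import numpy as np

def svmh_sample(log_target, x0, n_samples, rng=None):
    """Sketch of a stochastic-volatility-style random walk MH sampler.

    Each iteration draws a fresh proposal scale (log-normal here, an
    illustrative assumption) and proposes from a Gaussian centred at the
    current state. Because the scale is drawn independently of the state,
    the mixture proposal remains symmetric and the plain MH acceptance
    ratio applies.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    logp = log_target(x)
    samples = np.empty((n_samples, x.size))
    accepts = 0
    for i in range(n_samples):
        # Random per-iteration proposal scale ("stochastic volatility").
        sigma = rng.lognormal(mean=-1.0, sigma=0.5)
        prop = x + sigma * rng.standard_normal(x.size)
        logp_prop = log_target(prop)
        # Standard Metropolis accept/reject step.
        if np.log(rng.uniform()) < logp_prop - logp:
            x, logp = prop, logp_prop
            accepts += 1
        samples[i] = x
    return samples, accepts / n_samples

# Usage: sample a standard 2-D Gaussian target.
log_gauss = lambda x: -0.5 * np.dot(x, x)
samples, acc_rate = svmh_sample(log_gauss, np.zeros(2), 5000,
                                rng=np.random.default_rng(0))
```

Randomising the scale lets the chain mix small local moves with occasional larger jumps at negligible extra cost per iteration, which is the source of the reduced autocorrelation the abstract reports.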

Highlights

  • We propose Locally Scaled Metropolis–Hastings (LSMH), which uses the local geometry of the target to scale MH, and Stochastic Volatility Metropolis–Hastings (SVMH), which randomly scales MH

  • The results show that although both the Metropolis Adjusted Langevin Algorithm (MALA) and the No-U-Turn Sampler (NUTS) exceed the proposed algorithms on an effective sample size basis, the SVMH algorithm has similar or better predictive performance (AUC and negative log-likelihood (NLL)) than MALA and NUTS on most of the target densities considered

  • The results show that the NUTS algorithm outperforms all the Markov chain Monte Carlo (MCMC) methods on an effective sample size basis, while the MALA



Introduction

Markov chain Monte Carlo (MCMC) algorithms have been successfully utilised in fields such as cosmology, finance, and health [1,2,3,4,5,6,7,8,9] and are preferable to other approximate techniques such as variational inference because they guarantee asymptotic convergence to the target distribution [10,11]. Examples of MCMC methods include, inter alia, Hamiltonian Monte Carlo [9,18,19,20]. The execution times of MCMC algorithms are a significant issue in practice [21]. Algorithms with low running times are preferable to methods with longer run times.

