Abstract

In this work, we demonstrate how differentiable stochastic sampling techniques developed in the context of deep reinforcement learning can be used to perform efficient parameter inference over stochastic, simulation-based forward models. As a particular example, we focus on the problem of estimating parameters of halo occupation distribution (HOD) models that are used to connect galaxies with their dark matter haloes. Using a combination of continuous relaxation and gradient re-parametrization techniques, we obtain well-defined gradients with respect to HOD parameters through discrete galaxy catalogue realizations. Access to these gradients allows us to leverage efficient sampling schemes, such as Hamiltonian Monte Carlo, and greatly speed up parameter inference. We demonstrate our technique on a mock galaxy catalogue generated from the Bolshoi simulation using a standard HOD model, and find posteriors near-identical to those from standard Markov chain Monte Carlo techniques with an increase of ∼8× in convergence efficiency. Our differentiable HOD model also has broad applications in full forward-model approaches to cosmic structure and cosmological analysis.
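To illustrate the core idea, the following is a minimal NumPy sketch of the continuous-relaxation and re-parametrization trick the abstract refers to, applied to a single Bernoulli occupation draw (e.g. whether a halo hosts a central galaxy with probability p). This is an illustrative Concrete/Gumbel-Softmax-style relaxation, not the paper's actual implementation; the function names, temperature value, and finite-difference check are assumptions for demonstration.

```python
import numpy as np

def relaxed_bernoulli(p, tau, u):
    """Concrete (Gumbel-Softmax) relaxation of a Bernoulli(p) draw.

    The noise u ~ Uniform(0, 1) is sampled independently of p
    (re-parametrization), so for fixed u the output is a smooth,
    deterministic function of p -- hence differentiable.
    As tau -> 0 the draws approach hard 0/1 Bernoulli samples.
    """
    noise = np.log(u) - np.log1p(-u)                     # Logistic(0, 1) sample
    logits = (np.log(p) - np.log1p(-p) + noise) / tau    # relaxed logit
    return 1.0 / (1.0 + np.exp(-logits))                 # value in (0, 1)

rng = np.random.default_rng(42)
u = rng.uniform(1e-6, 1.0 - 1e-6, size=10_000)           # fixed noise draws
p, tau, eps = 0.3, 0.5, 1e-6

z = relaxed_bernoulli(p, tau, u)

# With the noise held fixed, a finite-difference check shows the sample
# has a well-defined, positive gradient with respect to p.
grad = (relaxed_bernoulli(p + eps, tau, u)
        - relaxed_bernoulli(p - eps, tau, u)) / (2.0 * eps)
```

In the paper's setting, relaxed draws like `z` replace hard catalogue realizations inside the forward model, so gradients of a likelihood with respect to HOD parameters can flow through the sampling step and feed a Hamiltonian Monte Carlo sampler.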
