Abstract

Prior information often takes the form of parameter constraints. Bayesian methods include such information through prior distributions having constrained support. By using posterior sampling algorithms, one can quantify uncertainty without relying on asymptotic approximations. However, sharply constrained priors are not necessary in some settings and tend to limit modelling scope to a narrow set of computationally tractable distributions. We propose to replace the sharp indicator function of the constraint with an exponential kernel, thereby creating a close-to-constrained neighbourhood within the Euclidean space in which the constrained subspace is embedded. This kernel decays with distance from the constrained space at a rate depending on a relaxation hyperparameter. By avoiding the sharp constraint, we enable use of off-the-shelf posterior sampling algorithms, such as Hamiltonian Monte Carlo, facilitating automatic computation in a broad range of models. We study the constrained and relaxed distributions under multiple settings and theoretically quantify their differences. Application of the method is illustrated through several novel modelling examples.
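The core idea can be illustrated with a minimal sketch. Here the constraint set, the distance function, and the hyperparameter name `lam` are illustrative assumptions (the abstract does not fix a specific constraint): the sharp indicator of a nonnegativity constraint is replaced by an exponential kernel in the Euclidean distance to the constraint set, yielding a log-density term that is finite everywhere and hence usable with gradient-based samplers.

```python
import numpy as np

# Hypothetical example: relax the sharp constraint theta >= 0.
# The constrained prior multiplies by the indicator 1{theta in C};
# the relaxed prior instead multiplies by exp(-d(theta, C) / lam),
# where d is the Euclidean distance to the constraint set C and
# lam is the relaxation hyperparameter.

def dist_to_nonneg(theta):
    # Euclidean distance from theta to the nonnegative orthant:
    # only negative coordinates contribute to the projection residual.
    return np.linalg.norm(np.minimum(theta, 0.0))

def relaxed_log_prior_term(theta, lam=0.01):
    # log exp(-d/lam) = -d/lam: zero on the constraint set and
    # decaying linearly with distance outside it, so an off-the-shelf
    # sampler (e.g. Hamiltonian Monte Carlo) can target the posterior
    # without handling a support boundary.
    return -dist_to_nonneg(theta) / lam
```

Smaller values of `lam` concentrate the relaxed distribution more tightly around the constrained subspace, at the cost of a stiffer target for the sampler.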
