Abstract

Advances in computational speed have enabled the development of many Bayesian probabilistic models through Markov chain Monte Carlo (MCMC) posterior sampling methods. These models include Bayesian hierarchical regression methods, which use group-level information to inform individual asset predictions. Hierarchical models are increasingly used for prognostics as they recognise that the parameter estimates for an individual asset may be rationally influenced by data from other similar assets. Larger, higher-dimensional datasets require more efficient sampling methods than traditional MCMC techniques. Hamiltonian Monte Carlo (HMC) has been used across many fields to address high-dimensional, sparse, or non-conjugate data. Due to the need to compute the posterior derivative and the flexibility in the tuning parameters, HMC is often difficult to hand code. We investigate a probabilistic programming language, Stan, which allows the implementation of HMC sampling, with particular focus on Bayesian hierarchical models in prognostics. The benefits and limitations of HMC using Stan are explored and compared to the widely used Gibbs sampler and Metropolis-Hastings (MH) algorithm. Results are demonstrated using three case studies on lithium-ion batteries. Stan reduced coding complexity and sampled from posterior distributions more efficiently than the Metropolis-Hastings algorithm. HMC sampling became less efficient with increasing data size and hierarchical complexity, due to high curvature in the posterior distribution. Stan was shown to be a robust language which allows easier inference in the Bayesian paradigm.
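To illustrate why HMC is difficult to hand code, a minimal sketch of the algorithm is given below. This is not the paper's Stan implementation: it samples a standard normal target, whose negative log-density U(q) = q²/2 has gradient dU/dq = q, and the step size, number of leapfrog steps, and function names are all illustrative assumptions. Note that even this toy version requires the posterior gradient and two tuning parameters, which Stan handles automatically via automatic differentiation and adaptation.

```python
import numpy as np

def hmc_sample(n_samples=2000, step_size=0.1, n_leapfrog=20, seed=0):
    """Minimal HMC sketch for a standard normal target (illustrative only)."""
    rng = np.random.default_rng(seed)
    U = lambda q: 0.5 * q ** 2    # negative log posterior (up to a constant)
    grad_U = lambda q: q          # its gradient, needed by the leapfrog steps
    q = 0.0
    samples = []
    for _ in range(n_samples):
        p = rng.normal()          # resample auxiliary momentum each iteration
        q_new, p_new = q, p
        # Leapfrog integration of the Hamiltonian dynamics
        p_new -= 0.5 * step_size * grad_U(q_new)
        for _ in range(n_leapfrog - 1):
            q_new += step_size * p_new
            p_new -= step_size * grad_U(q_new)
        q_new += step_size * p_new
        p_new -= 0.5 * step_size * grad_U(q_new)
        # Metropolis accept/reject on the change in total Hamiltonian
        dH = (U(q) + 0.5 * p ** 2) - (U(q_new) + 0.5 * p_new ** 2)
        if np.log(rng.uniform()) < dH:
            q = q_new
        samples.append(q)
    return np.array(samples)

draws = hmc_sample()
print(draws.mean(), draws.std())  # should be near 0 and 1 respectively
```

In practice, tuning `step_size` and `n_leapfrog` by hand is the hard part: too large a step size drives the acceptance rate toward zero in regions of high posterior curvature, which is the same pathology the abstract reports for complex hierarchical models.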
