Abstract

“Bayesian learning works even with censored information.”

We often learn about a problem from information that is incomplete or censored: for example, a medical treatment may cause side effects without revealing what the right dose should have been. Bayesian belief models are useful in such settings, but they cannot be constructed using traditional methods; as a result, practitioners have developed ways of constructing them approximately. These approximations have been very successful in many application domains, yet until now they have lacked theoretical support. The paper “Consistency analysis of sequential learning under approximate Bayesian inference,” by Chen and Ryzhov, links approximate Bayesian learning to stochastic approximation theory. Using this link, the authors prove, for the first time, the consistency of a suite of approximate Bayesian methods culled from the literature. One highlight is an entirely new consistency proof for Bayesian logistic regression, a well-established approximation technique that essentially treats logistic regression as if it were ordinary least squares.
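To make the last point concrete, the sketch below shows one common way such an approximation is set up: the belief over the regression coefficients is kept Gaussian, and each new observation triggers a recursive update whose algebra mirrors recursive least squares, obtained by locally linearizing the logistic likelihood (a Laplace/extended-Kalman-filter-style step). This is only an illustrative sketch of the general idea referenced in the abstract; the function name, update form, and simulation setup are assumptions, not the authors' exact recursion from the paper.

```python
import numpy as np

def approx_bayes_logistic_update(mu, Sigma, x, y):
    """One recursive update for approximate Bayesian logistic regression.

    The belief over the coefficient vector is a Gaussian N(mu, Sigma).
    After observing features x and binary outcome y, the logistic
    likelihood is linearized around the current mean, so the update takes
    the same form as a recursive least-squares step.  Illustrative sketch
    only; not the paper's exact method.
    """
    p = 1.0 / (1.0 + np.exp(-float(x @ mu)))   # predicted P(y = 1)
    w = max(p * (1.0 - p), 1e-10)              # local curvature of the log-likelihood
    Sx = Sigma @ x
    # Sherman-Morrison rank-one update of the posterior covariance
    Sigma_new = Sigma - np.outer(Sx, Sx) * (w / (1.0 + w * float(x @ Sx)))
    # The mean moves toward the observation, exactly as in least squares
    mu_new = mu + Sigma_new @ x * (y - p)
    return mu_new, Sigma_new

# Minimal usage example with a diffuse prior and simulated data
rng = np.random.default_rng(0)
d = 3
mu, Sigma = np.zeros(d), 10.0 * np.eye(d)
theta_true = np.array([1.0, -2.0, 0.5])
for _ in range(500):
    x = rng.normal(size=d)
    y = rng.random() < 1.0 / (1.0 + np.exp(-x @ theta_true))
    mu, Sigma = approx_bayes_logistic_update(mu, Sigma, x, float(y))
print("posterior mean estimate:", mu)
```

The consistency question studied by Chen and Ryzhov is whether recursions of this general kind drive the belief toward the true parameter as observations accumulate, which is what links them to stochastic approximation theory.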
