Abstract

The Dirichlet process prior (DPP) is used to model an unknown probability distribution, F. This eliminates the need for parametric model assumptions, providing robustness in problems where there is significant model uncertainty. Two important parametric techniques for learning are extended to this non‐parametric context for the first time. These are (i) sequential stopping, which proposes an optimal stopping time for on‐line learning of F using i.i.d. sampling; and (ii) stabilized forgetting, which updates the DPP in response to changes in F, but without the need for a formal transition model. In each case, a practical and highly tractable algorithm is revealed, and simulation studies are reported. Copyright © 2007 John Wiley & Sons, Ltd.
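The on-line learning of F mentioned in the abstract rests on the conjugacy of the Dirichlet process: given a DP(α, G0) prior and n i.i.d. observations, the posterior is again a Dirichlet process, and its predictive distribution follows the Pólya urn scheme. The following is a minimal illustrative sketch of that predictive sampling, not the paper's algorithm; the function name `polya_urn_sample` and all parameters are assumptions made for illustration.

```python
import random

def polya_urn_sample(alpha, base_sampler, observations, n_new):
    """Draw n_new values from the DP posterior predictive (Polya urn).

    With a DP(alpha, G0) prior on F and i.i.d. data, the next draw
    reuses a previously seen value with probability proportional to
    its count, or is a fresh draw from the base measure G0 with
    probability proportional to alpha. Illustrative sketch only.
    """
    values = list(observations)  # atoms seen so far
    draws = []
    for _ in range(n_new):
        n = len(values)
        if random.random() < alpha / (alpha + n):
            x = base_sampler()          # new atom from the base measure G0
        else:
            x = random.choice(values)   # reuse an existing atom
        values.append(x)                # urn grows: rich-get-richer effect
        draws.append(x)
    return draws

# Example: with alpha = 0 the predictive can only resample the data,
# i.e. the posterior mean collapses onto the empirical distribution.
samples = polya_urn_sample(0.0, lambda: -1.0, [1.0, 2.0, 3.0], 10)
```

As α → 0 the posterior concentrates on the empirical distribution of the data, while large α keeps the prior base measure G0 influential; this trade-off is exactly what makes the sequential-stopping question (how many i.i.d. samples suffice) well posed.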


