Abstract
As a promising alternative to standard (often intractable) planning techniques based on Bellman equations, we propose a method for optimizing POMDP controllers by probabilistic inference in a novel, equivalent single-DBN generative model. This inference approach to POMDP planning allows (1) the application of a variety of probabilistic-inference techniques for single graphical models, and (2) the exploitation of the factored structure of a controller architecture, taking advantage of the natural structural constraints of planning problems to represent them compactly. Our contributions can be summarized as follows: (1) we design a novel single-DBN generative model in which the task of probabilistic inference is equivalent to the original problem of optimizing POMDP controllers, and (2) we develop several inference approaches that approximate the value of a policy when exact inference is intractable for large problems with complex graphical models. The proposed approaches to policy optimization by probabilistic inference are evaluated on several POMDP benchmark problems, and the performance of the implemented approximation algorithms is compared.
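To make the optimization target concrete: the quantity the inference machinery approximates is the expected discounted value of a finite-state controller executed in a POMDP. The sketch below evaluates such a controller by iterating the cross-product Bellman equation over (controller node, hidden state) pairs. All numbers, the toy POMDP, and the controller itself are invented for illustration; this is the standard evaluation recurrence, not the paper's DBN inference procedure.

```python
import numpy as np

# Toy 2-state, 2-action, 2-observation POMDP (hypothetical numbers).
S, A, O, N = 2, 2, 2, 2  # N = number of controller nodes
gamma = 0.9              # discount factor

T = np.array([[[0.9, 0.1], [0.2, 0.8]],   # T[a, s, s'] transition model
              [[0.5, 0.5], [0.1, 0.9]]])
Z = np.array([[[0.8, 0.2], [0.3, 0.7]],   # Z[a, s', o] observation model
              [[0.6, 0.4], [0.1, 0.9]]])
R = np.array([[1.0, 0.0], [0.0, 2.0]])    # R[s, a] immediate reward

# Deterministic finite-state controller (also made up):
act = np.array([0, 1])             # action emitted by each node
succ = np.array([[0, 1], [1, 0]])  # node transition given (node, observation)

# Policy evaluation: fixed point of
#   V(n, s) = R(s, a_n) + gamma * sum_{s', o} T[a_n, s, s'] Z[a_n, s', o] V(succ(n, o), s')
V = np.zeros((N, S))
for _ in range(500):  # gamma-contraction, so 500 sweeps converge far below 1e-6
    V_new = np.empty_like(V)
    for n in range(N):
        a = act[n]
        for s in range(S):
            v = R[s, a]
            for s2 in range(S):
                for o in range(O):
                    v += gamma * T[a, s, s2] * Z[a, s2, o] * V[succ[n, o], s2]
            V_new[n, s] = v
    V = V_new

b0 = np.array([0.5, 0.5])  # initial state belief; controller starts in node 0
print(round(float(b0 @ V[0]), 3))
```

In the planning-as-inference view the abstract describes, this same value is recovered as a (scaled) likelihood in a generative model over trajectories, so improving the likelihood by inference improves the controller's value.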
Published in: Proceedings of the AAAI Conference on Artificial Intelligence