Abstract
A partially observable Markov decision process (POMDP) is commonly used to model a stochastic environment with unobservable states to support optimal decision making. Computing the optimal policy for a large-scale POMDP is known to be intractable. Belief compression, an approximate solution, has recently been proposed to reduce the dimensionality of a POMDP's belief state space and has been shown to improve the problem's tractability. In this paper, based on the conjecture that temporally close belief states can be characterized by a lower intrinsic dimension, we propose a spatio-temporal belief clustering that considers both the spatial (in the belief space) and temporal similarities of belief states, and we incorporate it into the belief compression algorithm. The proposed clustering yields belief state clusters as sub-POMDPs of much lower dimension, which can then be distributed to a set of agents for collaborative problem solving. The proposed method has been tested on a synthesized navigation problem (Hallway2) and empirically shown to produce policies with superior long-term rewards compared with those based on belief compression alone. Some future research directions for extending this belief state analysis approach are also discussed.
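The core idea of combining spatial and temporal similarity of belief states can be illustrated with a minimal sketch. The snippet below is a hypothetical illustration, not the paper's algorithm: each belief vector (a probability distribution over hidden states) is augmented with a scaled time-step coordinate, and the augmented points are grouped with plain k-means. The weighting parameter `lam` and the clustering routine itself are assumptions introduced here for clarity.

```python
import numpy as np


def cluster_beliefs(beliefs, times, k, lam=0.1, iters=50, seed=0):
    """Cluster sampled belief states by spatial AND temporal similarity.

    Hypothetical sketch: each belief vector is augmented with a scaled
    time coordinate (weight `lam`), then grouped with basic k-means.
    Returns an integer cluster label per belief state.
    """
    rng = np.random.default_rng(seed)
    # Augment: [belief distribution | lam * time step]
    X = np.hstack([np.asarray(beliefs, dtype=float),
                   lam * np.asarray(times, dtype=float)[:, None]])
    # Initialize centers from the data points themselves.
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        # Recompute each center as the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

Beliefs that are close both in the belief simplex and in time land in the same cluster, and each cluster could then serve as a lower-dimensional sub-POMDP handed to one agent.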