Abstract
Real-time dynamic programming (RTDP) is a prominent real-time algorithm for solving non-deterministic planning problems with full observability. RTDP has two key advantages over other dynamic programming (DP) algorithms: first, it obtains an optimal policy without evaluating the entire state space; second, it exhibits good anytime behavior. However, RTDP converges slowly. In this paper, we introduce RTDP(k), an algorithm with a structure similar to RTDP's that improves convergence while retaining RTDP's real-time properties. RTDP(k) updates k extended states per iteration following a "bounded propagation" strategy. For Markov decision processes (MDPs), and in particular for stochastic shortest-path problems (SSPs), we prove two results: first, every RTDP(k) trial terminates in a finite number of steps; second, RTDP(k) eventually converges to an optimal policy. From a practical point of view, we show that RTDP(k) produces better solutions in the first trial and converges faster than RTDP on benchmarks for real-time search.
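To make the trial structure concrete, the following is a minimal sketch of an RTDP-style trial on a toy stochastic shortest-path problem, with extra Bellman backups of up to k additional states after each on-trajectory backup. The abstract does not specify the paper's "bounded propagation" strategy, so backing up predecessor states is an illustrative assumption, and all names (`rtdp_k_trial`, `bellman_update`, the toy SSP) are hypothetical.

```python
import random

# Toy SSP (illustrative, not from the paper):
# transitions[state][action] = list of (probability, next_state);
# every action has unit cost, and "G" is the absorbing goal.
GOAL = "G"
transitions = {
    "S": {"a": [(0.5, "M"), (0.5, "S")]},
    "M": {"a": [(1.0, "G")]},
    "G": {},
}
COST = 1.0

def q_value(V, s, a):
    """Expected cost of taking action a in state s under value table V."""
    return COST + sum(p * V[ns] for p, ns in transitions[s][a])

def bellman_update(V, s):
    """Greedy Bellman backup of s; returns the greedy action (None at goal)."""
    if s == GOAL:
        return None
    best_a = min(transitions[s], key=lambda a: q_value(V, s, a))
    V[s] = q_value(V, s, best_a)
    return best_a

def rtdp_k_trial(V, start, k, predecessors, rng):
    """One RTDP(k)-style trial: simulate greedily from start to goal,
    and after each on-trajectory backup also back up at most k extra
    states (here: known predecessors) -- a guessed instance of the
    paper's bounded propagation."""
    s = start
    while s != GOAL:
        a = bellman_update(V, s)
        for p in list(predecessors.get(s, []))[:k]:  # bounded propagation
            bellman_update(V, p)
        # Sample the next state from the chosen action's outcomes.
        r, acc = rng.random(), 0.0
        for prob, ns in transitions[s][a]:
            acc += prob
            if r <= acc:
                s = ns
                break

# Run repeated trials; V should approach the optimal values
# V("M") = 1 and V("S") = 3 for this toy problem.
rng = random.Random(0)
V = {s: 0.0 for s in transitions}
preds = {"M": ["S"], "G": ["M"]}
for _ in range(200):
    rtdp_k_trial(V, "S", k=2, predecessors=preds, rng=rng)
```

Plain RTDP corresponds to k = 0 here: only states on the simulated trajectory are backed up, which is why additional propagation per step can speed up convergence of the value table.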