Abstract

In reinforcement learning, large state and action spaces make exact estimation of the value function impractical, so the value function is often represented as a linear combination of basis functions whose coefficients are the parameters to be estimated. However, preparing suitable basis functions requires prior knowledge of the problem and is, in general, difficult. To overcome this difficulty, Keller recently proposed an adaptive basis-function construction technique, but it incurs excessive computational cost. We propose an efficient approach in which the problem of approximating the value function is decomposed into a number of subproblems, each of which can be solved at small computational cost. Computer experiments show that the CPU time required by our method is much smaller than that of the existing method.
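
To make the linear representation concrete, here is a minimal sketch of value-function approximation with fixed basis functions, where the weights are found by an LSTD-style least-squares solve. The radial basis functions, the one-dimensional state space, and the sampled transitions below are illustrative assumptions, not taken from the paper; they stand in for whatever basis and data a real problem would supply.

```python
# Minimal sketch: linear value-function approximation V(s) ~ phi(s)^T w.
# Everything about this toy MDP (states, reward, dynamics) is hypothetical.
import numpy as np

def features(s, centers, width=1.0):
    """Radial basis features phi(s): one Gaussian bump per center."""
    return np.exp(-((s - centers) ** 2) / (2 * width ** 2))

rng = np.random.default_rng(0)
centers = np.linspace(0.0, 10.0, 5)      # 5 hand-chosen basis functions
gamma = 0.9                               # discount factor

# Hypothetical sampled transitions (s, r, s') under some fixed policy.
S  = rng.uniform(0, 10, size=200)
S2 = np.clip(S + rng.normal(0.5, 0.2, size=200), 0, 10)
R  = -np.abs(S - 5.0)                     # reward peaks at state 5

# LSTD-style solve for the TD fixed point: A w = b,
# with A = Phi^T (Phi - gamma * Phi') and b = Phi^T r.
Phi  = np.array([features(s, centers) for s in S])
Phi2 = np.array([features(s, centers) for s in S2])
A = Phi.T @ (Phi - gamma * Phi2)
b = Phi.T @ R
w = np.linalg.solve(A, b)                 # the linear coefficients to estimate

print("estimated V(5.0) =", features(5.0, centers) @ w)
```

The hand-picked Gaussian centers above are exactly the kind of prior knowledge the abstract refers to; adaptive basis construction replaces this manual choice, and the paper's contribution is doing so at low computational cost by decomposing the approximation problem into cheap subproblems.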
