Abstract

The Markov decision process (MDP) is a foundational framework for reinforcement learning in sequential decision problems. The continuous-time Markov decision process (CTMDP) extends the discrete-time MDP by allowing actions to be taken at any point in time. Prior work has given little consideration to reinforcement learning methods for solving CTMDPs. The aim of this article is to present a reinforcement learning approach based on sample paths. Building on the key concept of the performance potential function, a policy iteration algorithm under the average-reward criterion is presented. Then, using the Robbins-Monro method, a temporal difference formula for estimating the performance potential function is derived. Simulation results indicate that the proposed algorithms converge to the solution of the CTMDP problem at a reasonable speed.
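
The abstract does not state the paper's exact update rule, so the following is a minimal, hypothetical Python sketch of what a Robbins-Monro style temporal difference estimate of the performance potential could look like along a CTMDP sample path. The sample-path tuple layout (state, sojourn time, reward rate, next state), the step-size schedule, and all names here are assumptions for illustration, not the authors' formula.

```python
import numpy as np

def td_potential_estimate(sample_path, num_states, alpha0=0.5):
    """Estimate the performance potential g and the average reward eta
    from a sample path of (state, sojourn_time, reward_rate, next_state)
    transitions. A sketch only; the actual paper's update may differ."""
    g = np.zeros(num_states)   # performance potential estimates
    total_reward = 0.0
    total_time = 0.0
    eta = 0.0                  # average-reward estimate along the path
    for k, (i, tau, r, j) in enumerate(sample_path, start=1):
        # Path estimate of the average reward (time-weighted).
        total_reward += r * tau
        total_time += tau
        eta = total_reward / total_time
        # Robbins-Monro step sizes: sum(alpha) = inf, sum(alpha^2) < inf.
        alpha = alpha0 / k
        # Temporal difference error: reward accrued over the sojourn,
        # minus the average reward over the same time, plus the change
        # in potential between the successive states.
        delta = r * tau - eta * tau + g[j] - g[i]
        g[i] += alpha * delta
    return g, eta
```

The decaying step size is what makes this a Robbins-Monro stochastic approximation: with a summable-squares schedule, the noisy per-transition corrections average out while the estimate can still move far enough to converge.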
