Abstract

The adaptive dynamic programming (ADP) technique is adopted in this work to investigate the optimal control problem of Markovian jump systems. By utilizing Bellman's optimality principle, a discrete Hamilton-Jacobi-Bellman (HJB) equation is established to design the optimal controller for the system under consideration. Then, based on value iteration, a new ADP algorithm is proposed for solving the established HJB equation. It is proven that the iterative solution sequence generated by the developed ADP approach under zero initial values is monotonically convergent. Neural networks are constructed to implement the presented value-iteration ADP algorithm. Finally, simulation studies on two Markovian jump systems demonstrate the effectiveness of the proposed optimal control method.
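The value-iteration scheme described above can be sketched for a concrete special case. The abstract does not specify the system class or the paper's neural-network implementation, so the sketch below assumes a discrete-time Markovian jump *linear* system with quadratic cost, for which each mode's value function is quadratic and one Bellman update per mode reduces to a coupled Riccati-type recursion; all system matrices are illustrative placeholders. Starting from zero value matrices, as in the convergence result stated above, the iterates are monotonically nondecreasing.

```python
import numpy as np

def mjls_value_iteration(A, B, Q, R, Pi, n_iter=500, tol=1e-10):
    """Value-iteration sketch for x_{k+1} = A[r_k] x_k + B[r_k] u_k,
    where the mode r_k follows a Markov chain with transition matrix Pi
    and the stage cost is x'Qx + u'Ru.  Each mode i carries a quadratic
    value function x'P[i]x; a sweep performs one Bellman update per mode.
    Illustrative only -- not the paper's neural-network implementation."""
    modes = len(A)
    n = A[0].shape[0]
    P = [np.zeros((n, n)) for _ in range(modes)]  # zero initial values
    for _ in range(n_iter):
        # mode-conditioned expectation of the next-step value matrices
        M = [sum(Pi[i, j] * P[j] for j in range(modes)) for i in range(modes)]
        P_new = []
        for i in range(modes):
            S = R + B[i].T @ M[i] @ B[i]
            K = np.linalg.solve(S, B[i].T @ M[i] @ A[i])  # optimal gain for mode i
            P_new.append(Q + A[i].T @ M[i] @ (A[i] - B[i] @ K))
        if max(np.max(np.abs(Pn - Po)) for Pn, Po in zip(P_new, P)) < tol:
            P = P_new
            break
        P = P_new
    return P
```

Under mean-square stabilizability of the assumed jump-linear system, the sequence of `P` matrices increases monotonically from zero toward the solution of the coupled HJB (Riccati) equations, mirroring the convergence property claimed in the abstract.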
