Abstract

In this paper, an effective model-free approach based on the adaptive dynamic programming technique is proposed to solve the infinite-horizon optimal control problem for affine nonlinear systems. The developed approach, built on the actor-critic structure, employs two multilayer perceptron neural networks to approximate the state-action value function and the control policy, respectively. For policy iteration, it uses data collected arbitrarily from any reasonable sampling distribution. In the policy evaluation phase, a novel objective function is defined for updating the critic network, which makes the critic network converge to the solution of the Bellman equation directly rather than iteratively. In the policy improvement phase, the action network is updated to minimize the outputs of the critic network. The two phases alternate until no further improvement of the control policy is observed, at which point the optimal control policy has been obtained. Two simulation examples are provided to demonstrate the effectiveness of the approach.
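The abstract describes the actor-critic policy-iteration scheme only at a high level; the following is a minimal PyTorch sketch of one plausible realization. The discrete-time discounted formulation, the placeholder dynamics `step`, the quadratic `running_cost`, the network sizes, and all hyperparameters are illustrative assumptions rather than the paper's specification. In particular, the critic loss below is a plain squared Bellman residual with gradients through both sides (one reading of "converging to the Bellman equation directly rather than iteratively"); the paper's novel objective may differ.

```python
import torch
import torch.nn as nn

# Assumed problem setup for an affine nonlinear system x' = f(x) + g(x) u.
STATE_DIM, ACT_DIM, GAMMA = 2, 1, 0.99  # GAMMA: assumed discount factor

def mlp(in_dim, out_dim):
    """Small multilayer perceptron, as in the abstract's two networks."""
    return nn.Sequential(nn.Linear(in_dim, 64), nn.Tanh(),
                         nn.Linear(64, 64), nn.Tanh(),
                         nn.Linear(64, out_dim))

critic = mlp(STATE_DIM + ACT_DIM, 1)  # approximates Q(x, u), the state-action value
actor = mlp(STATE_DIM, ACT_DIM)       # approximates u(x), the control policy
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-3)

def running_cost(x, u):
    # Placeholder quadratic cost (an assumption, not from the paper).
    return (x ** 2).sum(-1, keepdim=True) + 0.1 * (u ** 2).sum(-1, keepdim=True)

def step(x, u):
    # Placeholder affine dynamics x' = f(x) + g(x) u, for illustration only.
    return x + 0.05 * (-x + torch.tanh(x) + u)

# Off-policy data: states and actions drawn from an arbitrary
# sampling distribution, as the abstract allows.
x = torch.randn(4096, STATE_DIM)
u = torch.randn(4096, ACT_DIM)
x_next, cost = step(x, u), running_cost(x, u)

for it in range(50):  # alternate the two phases of policy iteration
    # Policy evaluation: minimize the squared Bellman residual of Q,
    # driving the critic toward the Bellman equation in one phase.
    for _ in range(200):
        with torch.no_grad():
            u_next = actor(x_next)
        q = critic(torch.cat([x, u], -1))
        q_next = critic(torch.cat([x_next, u_next], -1))
        loss_c = (q - (cost + GAMMA * q_next)).pow(2).mean()
        opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    # Policy improvement: update the actor to minimize the critic's output.
    for _ in range(200):
        loss_a = critic(torch.cat([x, actor(x)], -1)).mean()
        opt_a.zero_grad(); loss_a.backward(); opt_a.step()
```

In this sketch the Bellman residual is differentiated through both the current and next Q-values, so a zero-loss critic satisfies the Bellman equation exactly; a fixed-point scheme with detached targets would instead converge iteratively, which the abstract contrasts against.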
