Abstract

This work proposes an online policy iteration procedure for the synthesis of suboptimal control laws for uncertain Linear Time Invariant (LTI) Asymptotically Null-Controllable with Bounded Inputs (ANCBI) systems. The proposed policy iteration method relies on: a policy evaluation step with a piecewise quadratic Lyapunov function in both the state and the deadzone functions of the input signals; and a policy improvement step that simultaneously guarantees closeness to optimality (exploitation) and persistence of excitation (exploration). The proposed approach guarantees convergence of the trajectory to a neighborhood of the origin. Moreover, the trajectories can be made arbitrarily close to the optimal one provided that the rate at which the value function and the control policy are updated is fast enough. The inequalities required to hold at each policy evaluation step can be solved efficiently with semidefinite programming (SDP) solvers. A numerical example illustrates the results.
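The paper's method, with its piecewise quadratic Lyapunov function and deadzone terms, is more involved than can be shown here; as a minimal sketch of the general evaluate-then-improve structure it builds on, the following illustrates classical policy iteration for discrete-time LQR (Hewer's method) on a hypothetical double-integrator system (the system, costs, and initial gain are illustrative assumptions, not taken from the paper):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, solve_discrete_are

# Hypothetical marginally stable discrete-time LTI system (a sampled
# double integrator); not an example from the paper.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)          # state cost weight
R = np.array([[1.0]])  # input cost weight

# Initial stabilizing policy u = K @ x, chosen by hand for this sketch.
K = np.array([[-1.0, -2.0]])

for _ in range(30):
    # Policy evaluation: solve the Lyapunov equation
    #   (A + B K)^T P (A + B K) - P = -(Q + K^T R K)
    A_cl = A + B @ K
    P = solve_discrete_lyapunov(A_cl.T, Q + K.T @ R @ K)
    # Policy improvement: greedy gain with respect to the evaluated cost P.
    K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# The iterates converge to the LQR solution of the Riccati equation.
P_opt = solve_discrete_are(A, B, Q, R)
```

In the paper's setting, each policy evaluation step is instead posed as a set of matrix inequalities and handed to an SDP solver, which plays the role of the Lyapunov equation solve above.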

