Abstract

In this paper, a data-driven control approach based on reinforcement learning (RL) is developed to solve the global robust optimal output regulation problem (GROORP) of partially linear systems with both static uncertainties and nonlinear dynamic uncertainties. By developing a proper feedforward controller, the GROORP is converted into a global robust optimal stabilization problem. A robust optimal feedback controller is designed that stabilizes the system in the presence of dynamic uncertainties. The closed-loop system is shown to be input-to-output stable, with the static uncertainty viewed as the external input. This robust optimal controller is numerically approximated via RL. Nonlinear small-gain theory is applied to establish the input-to-output stability of the closed-loop system, thus solving the original GROORP. Simulation results validate the efficacy of the proposed methodology.

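For readers unfamiliar with how an optimal feedback controller can be numerically approximated, the following is a minimal, model-based policy-iteration sketch (Kleinman's algorithm) for a nominal linear-quadratic problem. All matrices and parameters below are hypothetical illustrations, not taken from the paper; data-driven RL variants of this idea replace the Lyapunov-equation step with least-squares estimates computed from measured trajectories.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical nominal linear model x_dot = A x + B u (for illustration only;
# the paper treats partially linear systems with dynamic uncertainties).
A = np.array([[0.0, 1.0], [-1.0, -2.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)   # state weighting
R = np.eye(1)   # input weighting

# Initial stabilizing gain (K = 0 works here because A is Hurwitz).
K = np.zeros((1, 2))

for _ in range(20):
    Ak = A - B @ K
    # Policy evaluation: solve Ak' P + P Ak + Q + K' R K = 0.
    P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    # Policy improvement: K_new = R^{-1} B' P.
    K_new = np.linalg.solve(R, B.T @ P)
    if np.linalg.norm(K_new - K) < 1e-8:
        K = K_new
        break
    K = K_new

print("Approximated optimal feedback gain K:", K)
```

Each iteration evaluates the current policy via a Lyapunov equation and then improves the gain; the iterates converge to the optimal LQR gain under standard stabilizability and detectability assumptions.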