Abstract

In this article, a new model-free approach is proposed to solve the output regulation problem for networked control systems in which the system state can be lost (dropout) in the feedback channel. The goal of output regulation is to design a control law that drives the tracking error asymptotically to zero while maintaining the stability of the closed-loop system. The solvability of the output regulation problem depends on the solvability of a set of matrix equations known as the regulator equations. First, a restructured dynamic system is established by means of a Smith predictor; then, an off-policy reinforcement-learning algorithm is developed that computes the feedback gain from measured data alone, even when dropout occurs. Building on the learned feedback gain, a model-free procedure is provided for obtaining the feedforward gain from the regulator equations. Simulation results demonstrate the effectiveness of the proposed approach for discrete-time networked systems with unknown dynamics and dropout.
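To make the role of the regulator equations concrete, the following is a minimal sketch of the classical model-based construction they underpin. All matrices below are hypothetical illustrations, not taken from the paper, and the sketch assumes the plant model is known, whereas the paper's contribution is precisely to avoid this assumption via data-driven learning. For a plant x(k+1) = A x(k) + B u(k), an exosystem w(k+1) = S w(k), and a tracking error e(k) = C x(k) + F w(k), the regulator equations are X S = A X + B U and 0 = C X + F; they become a linear system after vectorization with vec(M X N) = (Nᵀ ⊗ M) vec(X):

```python
import numpy as np

# Hypothetical example system (illustrative values, not from the paper):
#   Plant:     x(k+1) = A x(k) + B u(k)
#   Exosystem: w(k+1) = S w(k)     (generates the reference signal)
#   Error:     e(k)   = C x(k) + F w(k)
A = np.array([[0.8, 0.2],
              [0.0, 0.9]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
S = np.array([[1.0]])    # constant reference
F = np.array([[-1.0]])   # e = x1 - w

n, m = B.shape           # state and input dimensions
q = S.shape[0]           # exosystem dimension
p = C.shape[0]           # error dimension

# Regulator equations: X S = A X + B U,  0 = C X + F.
# Vectorize with vec(M X N) = (N^T kron M) vec(X):
#   (S^T kron I_n - I_q kron A) vec(X) - (I_q kron B) vec(U) = 0
#   (I_q kron C) vec(X)                                      = -vec(F)
top = np.hstack([np.kron(S.T, np.eye(n)) - np.kron(np.eye(q), A),
                 -np.kron(np.eye(q), B)])
bot = np.hstack([np.kron(np.eye(q), C),
                 np.zeros((p * q, m * q))])
M = np.vstack([top, bot])
rhs = np.concatenate([np.zeros(n * q), -F.flatten(order="F")])

# Least-squares solve; for this square full-rank system the solution is exact.
sol, *_ = np.linalg.lstsq(M, rhs, rcond=None)
X = sol[: n * q].reshape((n, q), order="F")
U = sol[n * q :].reshape((m, q), order="F")

# Sanity checks: both regulator equations hold.
assert np.allclose(X @ S, A @ X + B @ U)
assert np.allclose(C @ X + F, 0.0)
```

Given a stabilizing feedback gain K (which the paper obtains off-policy from measured data rather than from A and B), the standard construction sets the feedforward term so that u(k) = K x(k) + (U - K X) w(k), making the error dynamics converge while the state tracks X w(k).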
