Abstract

This paper studies the multi-rate optimal control problem for a class of industrial processes in which the control rate is sometimes set faster than the sampling rate. This multi-rate phenomenon makes accurate modeling of the control system difficult. In this brief, we present a model-free self-learning control scheme that combines the lifting technique with Q-learning to solve this problem in real time. To handle the asynchronous periods, a lifted system is first established that reconstructs the input and output by stacking the control and sampling signals over one frame period, preserving the original dynamic information. Q-learning is then adopted to learn the optimal control policy from real-time data, and a convergence analysis of the proposed algorithm is derived. In this way, the control actions are executed at the faster rate, yielding better dynamic performance. Finally, a hardware-in-the-loop (HIL) simulation study of an industrial process is carried out, showing that the proposed approach achieves high tracking and real-time performance.
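
The abstract does not give the algorithmic details, but the lifting-plus-Q-learning pipeline it describes can be illustrated with a minimal sketch. The fragment below assumes a linear fast-rate plant x_{k+1} = A x_k + B u_k whose state is measured once every N control steps, a quadratic (LQ) cost on the lifted system, and full-state feedback for brevity; all matrices, weights, and the helper q_learning_lqr are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# ---- Hypothetical fast-rate plant (all values are illustrative assumptions) ----
A = np.array([[0.9, 0.1], [0.0, 0.8]])   # fast-rate state matrix (stable, so K = 0 is admissible)
B = np.array([[0.0], [0.1]])             # fast-rate input matrix
N = 3                                    # control updates per sampling (frame) period

# ---- Lifting: stack the N fast-rate inputs applied during one frame period ----
A_bar = np.linalg.matrix_power(A, N)
B_bar = np.hstack([np.linalg.matrix_power(A, N - 1 - i) @ B for i in range(N)])

n, m = A_bar.shape[0], B_bar.shape[1]
Q = np.eye(n)                            # state weight (assumed)
R = 0.1 * np.eye(m)                      # weight on the lifted input block (assumed)

def q_learning_lqr(A_bar, B_bar, Q, R, iters=30, samples=200, seed=0):
    """Model-free policy iteration on the lifted system.

    The quadratic Q-function Q(x, u) = [x; u]^T H [x; u] is identified from
    input/state data by least squares, and the policy is improved as
    u = -H_uu^{-1} H_ux x.  The plant matrices appear only in the data
    generation step, which stands in for measurements from the real process.
    """
    rng = np.random.default_rng(seed)
    K = np.zeros((m, n))                 # initial policy (stabilizing for this stable plant)
    nz = n + m
    for _ in range(iters):
        Phi, tgt = [], []
        x = rng.standard_normal(n)
        for _ in range(samples):
            u = -K @ x + 0.5 * rng.standard_normal(m)    # exploration noise for excitation
            x_next = A_bar @ x + B_bar @ u               # "measured" next frame state
            u_next = -K @ x_next
            z, z_next = np.hstack([x, u]), np.hstack([x_next, u_next])
            # Bellman equation z'Hz - z_next'Hz_next = stage cost, written via Kronecker products
            Phi.append(np.kron(z, z) - np.kron(z_next, z_next))
            tgt.append(x @ Q @ x + u @ R @ u)
            x = x_next
        h, *_ = np.linalg.lstsq(np.array(Phi), np.array(tgt), rcond=None)
        H = h.reshape(nz, nz)
        H = 0.5 * (H + H.T)                              # enforce symmetry of the quadratic form
        H_uu, H_ux = H[n:, n:], H[n:, :n]
        K = np.linalg.solve(H_uu, H_ux)                  # policy improvement step
    return K

K = q_learning_lqr(A_bar, B_bar, Q, R)
print("learned lifted-feedback gain K:\n", K)
```

Each outer iteration evaluates the current policy by least squares over the collected frame-level data and then improves it, so no explicit model of the lifted dynamics is needed once data are available; the exploration noise keeps the regressors persistently exciting. How the paper handles output feedback, tracking, and the convergence proof is not recoverable from the abstract alone.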

