Abstract

In this article, we propose a fast reinforcement learning (RL) control algorithm that enables online control of large-scale networked dynamic systems. RL is an effective way of designing model-free linear quadratic regulator (LQR) controllers for linear time-invariant (LTI) networks with unknown state-space models. However, when the network size is large, conventional RL can result in unacceptably long learning times. The proposed approach is to construct a compressed state vector by projecting the measured state through a projection matrix. This matrix is constructed from online measurements of the states so that it captures the dominant controllable subspace of the open-loop network model. An RL controller is then learned using the reduced-dimensional state instead of the original state, such that the resulting cost is close to the optimal LQR cost. The numerical benefits as well as the cyber-physical implementation benefits of the approach are verified through illustrative examples, including wide-area control of the IEEE 68-bus benchmark power system.
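
To make the two-step idea concrete, the sketch below (not the authors' code; all system sizes and values are hypothetical) builds a projection matrix from state snapshots of a toy discrete-time LTI network via an SVD, as a proxy for capturing the dominant excited/controllable subspace, and then computes a controller on the compressed state. A model-based Riccati solve on the reduced model stands in for the model-free RL learning step described in the abstract.

```python
# Illustrative sketch only: SVD-based state compression followed by a
# reduced-order LQR design.  The paper learns the reduced-state controller
# with model-free RL; the Riccati solve here is just a stand-in.
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)

# Toy networked LTI system: n states, m inputs (sizes are assumptions).
n, m, r = 50, 5, 8                        # full dim, input dim, reduced dim
A = 0.95 * np.eye(n) + 0.02 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))

# Step 1: collect state snapshots online under exploratory (probing) inputs.
T = 200
x = rng.standard_normal(n)
snapshots = []
for _ in range(T):
    u = 0.1 * rng.standard_normal(m)      # persistently exciting input
    x = A @ x + B @ u
    snapshots.append(x)
X = np.column_stack(snapshots)            # n x T snapshot matrix

# Step 2: projection matrix whose columns span the dominant excited subspace.
U, s, _ = np.linalg.svd(X, full_matrices=False)
P = U[:, :r]                              # n x r projection matrix

# Step 3: design a controller on the compressed state z = P^T x.
Ar, Br = P.T @ A @ P, P.T @ B             # reduced model (for illustration only)
Q, R = np.eye(r), np.eye(m)
S = solve_discrete_are(Ar, Br, Q, R)
Kr = np.linalg.solve(R + Br.T @ S @ Br, Br.T @ S @ Ar)  # reduced LQR gain
K_full = Kr @ P.T                         # gain acting on the measured state

print("closed-loop spectral radius:",
      max(abs(np.linalg.eigvals(A - B @ K_full))))
```

Because the learning problem is posed in the r-dimensional compressed state rather than the n-dimensional full state, the number of controller parameters to be learned drops from n*m to r*m, which is the source of the reduced learning time claimed in the abstract.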
