Abstract

In this letter, a model-free co-design scheme for a triggering-driven controller is proposed for probabilistic Boolean control networks (PBCNs) to achieve feedback stabilization with minimal control effort. Specifically, the Q-learning (QL) algorithm is exploited to devise a self-triggered strategy in which the controller update time is computed in advance from the current state information. A new self-triggered QL (STQL) algorithm is presented to co-design the feedback controller and the self-triggered scheme, rendering the closed-loop system stable at a given equilibrium point. Finally, several examples are presented to demonstrate the effectiveness of the proposed method.
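
The sketch below illustrates, under stated assumptions, the kind of tabular Q-learning loop the abstract describes: the learner picks both a control value and a dwell time (the self-triggered interval) for each visited state, so the controller update instants are decided in advance from the current state. The PBCN simulator `pbcn_step`, the state/control sizes, the reward shaping, and the (control, dwell-time) action encoding are all illustrative assumptions, not the paper's exact STQL formulation.

```python
import numpy as np

# Illustrative sizes; a 3-node PBCN with one Boolean input, for example.
N_STATES = 8        # 2^3 network states
N_CONTROLS = 2      # one Boolean control input
MAX_DWELL = 4       # candidate inter-event (triggering) intervals
TARGET = 0          # index of the desired equilibrium point

rng = np.random.default_rng(0)

def pbcn_step(state, control):
    """Hypothetical PBCN transition: stands in for the unknown probabilistic
    update rule, which in the model-free setting is only sampled, not known."""
    probs = np.full(N_STATES, 1.0 / N_STATES)   # dummy uniform dynamics
    return int(rng.choice(N_STATES, p=probs))

# Q-table over (state, control, dwell-time): the co-design picks both the
# feedback control value and how long to hold it before the next update.
Q = np.zeros((N_STATES, N_CONTROLS, MAX_DWELL))
alpha, gamma, eps = 0.1, 0.95, 0.1

for episode in range(2000):
    s = int(rng.integers(N_STATES))
    for _ in range(50):
        # Epsilon-greedy choice of a (control, dwell-time) pair.
        if rng.random() < eps:
            u, tau = int(rng.integers(N_CONTROLS)), int(rng.integers(MAX_DWELL))
        else:
            u, tau = np.unravel_index(np.argmax(Q[s]), Q[s].shape)

        # Self-triggered execution: hold the same control for tau+1 steps.
        s_next, reward = s, 0.0
        for _ in range(tau + 1):
            s_next = pbcn_step(s_next, u)
            reward += 1.0 if s_next == TARGET else -1.0
        reward -= 0.5   # fixed cost per controller update, penalizing effort

        # Standard Q-learning update on the augmented action space.
        Q[s, u, tau] += alpha * (reward + gamma * Q[s_next].max() - Q[s, u, tau])
        s = s_next
```

After training, the greedy policy `np.unravel_index(np.argmax(Q[s]), Q[s].shape)` returns, for each state, both the control to apply and how long to wait before the next update, which is the co-designed controller/trigger pair the abstract refers to.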
