Abstract
This article studies the dynamic transmission-scheduling problem for rate-limited networked control systems with multiple loops. An optimal scheduling policy is critical because, owing to limited spectrum resources and energy budgets, only a subset of subsystems can access the shared channels at each time step to update plant states affected by stochastic disturbances. An adaptive event-triggered stochastic scheduling policy is proposed based on the one-step-ahead state-estimation error between the sensor and the controller. By formulating the scheduling problem as a constrained multiagent partially observable Markov decision process, a multiagent reinforcement learning algorithm is developed to search for the optimal parameters of the scheduling policy, and an edge-assisted learning architecture is introduced to facilitate its implementation. An explicit performance index of the optimal scheduling is further derived, revealing the effects of system disturbances, observation noise, and the communication-rate constraint. Two numerical examples validate the proposed scheduling policy, showing that it outperforms several competitive scheduling schemes. Owing to its low computational complexity, the proposed scheduling may be advantageous in large-scale heterogeneous control systems, e.g., flow and pressure control in fluid transport pipelines.
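The core idea of an error-driven stochastic event trigger under a channel-capacity constraint can be sketched as follows. This is a minimal illustration, not the paper's method: the sigmoid trigger form, the parameter `theta`, and the largest-error tie-breaking rule are all assumptions for exposition, whereas in the article the policy parameters are learned by multiagent reinforcement learning.

```python
import numpy as np

def transmit_probability(error, theta):
    # Stochastic event trigger: a larger one-step-ahead estimation
    # error yields a higher probability of requesting the channel.
    # (Illustrative sigmoid form; the actual parameters would be
    # learned by the reinforcement-learning algorithm.)
    return 1.0 / (1.0 + np.exp(-theta * error))

def schedule(errors, theta, capacity, rng):
    # Each control loop independently decides whether to request
    # channel access; at most `capacity` loops transmit, with
    # larger-error loops prioritized when requests exceed capacity.
    errs = np.abs(np.asarray(errors, dtype=float))
    probs = transmit_probability(errs, theta)
    requests = rng.random(len(errs)) < probs
    idx = np.flatnonzero(requests)
    if len(idx) > capacity:
        idx = idx[np.argsort(-errs[idx])][:capacity]
    return idx

rng = np.random.default_rng(0)
# Four loops with differing estimation errors; two channel slots.
granted = schedule([0.1, 2.5, 0.05, 1.8], theta=2.0, capacity=2, rng=rng)
```

Here `granted` holds the indices of the loops allowed to transmit at this time step; the rate constraint is enforced by the `capacity` cap rather than by a deterministic threshold, preserving the stochastic character of the trigger.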