Abstract
Trained deep reinforcement learning (DRL) based controllers can effectively control dynamic systems where classical controllers are ineffective or difficult to tune. However, the lack of closed-loop stability guarantees for systems controlled by trained DRL agents hinders their adoption in practical applications. This study investigates the closed-loop stability of dynamic systems controlled by trained DRL agents using Lyapunov analysis based on a linear-quadratic polynomial approximation of the trained agent. In addition, this work develops an understanding of the system's stability margin to determine operational boundaries and critical thresholds of the system's physical parameters for effective operation. The proposed analysis is verified on a DRL-controlled system for several simulated and experimental scenarios. The DRL agent is trained using a detailed dynamic model of a non-linear system and then tested on the corresponding real-world hardware platform without any fine-tuning. Experiments are conducted over a wide range of system states and physical parameters, and the results confirm the validity of the proposed stability analysis (https://youtu.be/QlpeD5sTlPU).
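The core idea of the abstract — approximating a trained policy with a linear-quadratic polynomial and then checking closed-loop stability in the Lyapunov sense — can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's method: the `policy` function, the system matrices `A` and `B`, and all numerical values are invented assumptions standing in for a trained DRL agent and its plant. It fits a quadratic polynomial to sampled policy outputs by least squares, extracts the linear gain, and checks that the closed-loop linearization about the origin has eigenvalues with negative real parts (the condition under which a quadratic Lyapunov function certifying local asymptotic stability exists).

```python
import numpy as np

# Hypothetical stand-in for a trained DRL agent's action map (an assumption,
# not the paper's actual controller): a smooth nonlinear state-feedback law.
def policy(x):
    return -1.5 * x[0] - 0.8 * x[1] + 0.1 * x[0] ** 2

# Sample states near the equilibrium and record the policy's actions.
rng = np.random.default_rng(0)
X = rng.uniform(-0.5, 0.5, size=(200, 2))
u = np.array([policy(x) for x in X])

# Linear-quadratic polynomial features: [x1, x2, x1^2, x1*x2, x2^2].
Phi = np.column_stack(
    [X[:, 0], X[:, 1], X[:, 0] ** 2, X[:, 0] * X[:, 1], X[:, 1] ** 2]
)
coef, *_ = np.linalg.lstsq(Phi, u, rcond=None)
K = coef[:2]  # linear part of the fitted policy approximation

# Illustrative (assumed) continuous-time plant: x_dot = A x + B u.
A = np.array([[0.0, 1.0], [-2.0, 0.0]])
B = np.array([[0.0], [1.0]])
A_cl = A + B @ K.reshape(1, 2)  # closed-loop linearization at the origin

# Lyapunov-style check: local asymptotic stability holds if every
# eigenvalue of the closed-loop linearization has a negative real part.
eigs = np.linalg.eigvals(A_cl)
stable = bool(np.all(eigs.real < 0))
print(stable)  # True for these assumed dynamics and gains
```

One could extend this sketch toward a stability margin, as the abstract describes, by sweeping a physical parameter inside `A` and recording where the eigenvalue check first fails.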