Abstract

Control of nonlinear systems is a challenging task that often requires linearization, which limits the operating envelope. Moreover, designing a controller for such systems requires complex tuning rules and expert knowledge of the plant. Deep Reinforcement Learning (DRL) has been used as a controller for nonlinear systems, but its applicability is hindered by the Sim2Real gap, in which a DRL agent trained in simulation fails to transfer to the real system. In this work, we propose using DRL for the control of nonlinear systems across the Sim2Real gap by training on a detailed dynamic model of the plant. To demonstrate the proposed methodology, we use a tower crane as the nonlinear system of interest; the control policy of the DRL agent is learned rather than explicitly programmed. To highlight the effectiveness of the proposed DRL-based controller, its performance is compared with that of a PI controller. The experimental results demonstrate successful Sim2Real transfer and the effectiveness of the proposed approach.
