Abstract

The integration of reinforcement learning (RL) and model predictive control (MPC) is a promising approach to solving nonlinear optimization problems efficiently. In this paper, a digital receding horizon learning controller is proposed for continuous-time nonlinear systems with control constraints. The main idea is to develop a digital design for RL with an actor-critic design (ACD) in the framework of MPC, realizing near-optimal control of continuous-time nonlinear systems. In contrast to classic RL for continuous-time systems, the actor is learned at discrete time steps, while the critic evaluates the learned control policy continuously in the time domain. Moreover, soft barrier functions are used to handle the control constraints, and the robustness of the actor-critic network is proven. A simulation example demonstrates the effectiveness of the proposed approach.
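To make the scheme concrete, the following is a minimal Python sketch of a receding-horizon actor-critic loop of this flavor. Everything in it is an assumption for exposition, not the paper's actual design: the scalar dynamics f and g, the log-barrier form of the soft constraint penalty, the linear-in-features critic/actor parameterizations, and the gradient-style update laws are all illustrative placeholders.

```python
import numpy as np

# Hypothetical scalar nonlinear system: x_dot = f(x) + g(x) * u
def f(x):
    return -x + x**3        # drift dynamics (illustrative choice)

def g(x):
    return 1.0              # input gain (illustrative choice)

U_MAX = 1.0                 # control constraint |u| <= U_MAX
MU = 1e-2                   # soft-barrier weight

def soft_barrier(u):
    # Log-barrier penalty that grows as u approaches the bounds,
    # keeping the constrained problem smooth (one common choice).
    return -MU * np.log((U_MAX - u) * (u + U_MAX) / U_MAX**2)

def stage_cost(x, u):
    return x**2 + u**2 + soft_barrier(u)

# Linear-in-features critic and actor (illustrative parameterization)
phi  = lambda x: np.array([x, x**3])        # critic features
dphi = lambda x: np.array([1.0, 3 * x**2])  # their state gradient
psi  = lambda x: np.array([x, x**3])        # actor features
w_c, w_a = np.zeros(2), np.zeros(2)

dt, horizon, alpha_c, alpha_a = 0.01, 0.5, 0.5, 0.2
x = 0.8
for step in range(300):                     # receding-horizon outer loop
    # Digital actor: control held piecewise constant over the
    # sampling interval, clipped strictly inside the constraint set.
    u = float(np.clip(w_a @ psi(x), -0.99 * U_MAX, 0.99 * U_MAX))

    # Continuous-time critic: evaluate the learned policy by
    # integrating the running cost along the predicted trajectory.
    xs, J = x, 0.0
    for _ in range(int(horizon / dt)):
        us = float(np.clip(w_a @ psi(xs), -0.99 * U_MAX, 0.99 * U_MAX))
        J += stage_cost(xs, us) * dt
        xs += (f(xs) + g(xs) * us) * dt

    # Critic update: fit the value estimate to the rolled-out cost.
    e = w_c @ phi(x) - J
    w_c -= alpha_c * e * phi(x)

    # Actor update: step toward the HJB-style target u* = -0.5 g dV/dx,
    # clipped to the constraints (schematic, not the paper's exact law).
    u_star = float(np.clip(-0.5 * g(x) * (w_c @ dphi(x)),
                           -0.99 * U_MAX, 0.99 * U_MAX))
    w_a += alpha_a * (u_star - w_a @ psi(x)) * psi(x)

    x += (f(x) + g(x) * u) * dt             # apply first control, advance
```

Note how the sketch mirrors the split described above: the actor weights are updated once per sampling instant (the digital part), while the critic scores the policy by integrating the stage cost, including the soft barrier term, along the predicted continuous-time trajectory over the receding horizon.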
