The main objective of this study is to develop primal–dual differential dynamic programming (DDP), a model-based reinforcement learning (RL) framework that can handle constrained dynamic optimization problems. DDP has the advantages of providing a closed-loop policy and having computational complexity that grows linearly with the time horizon. To exploit these advantages, DDP should guarantee optimality and feasibility for disturbed states during closed-loop operation. Previous DDP methods consider feasibility only for the nominal state and can handle only limited types of constraints. In this paper, we propose a primal–dual DDP that incorporates a modified augmented Lagrangian and can handle general nonlinear constraints. We pay special attention to obtaining a feasible policy when the active set changes due to state perturbations, using a path-following predictor–corrector approach. The developed framework was applied to the van der Pol oscillator and a batch crystallization process, thereby validating the key aspects of this study.
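For context, the sketch below illustrates two of the ingredients the abstract names: the van der Pol oscillator used as a benchmark (discretized here with forward Euler) and a conventional augmented-Lagrangian stage cost for inequality constraints. The damping parameter, step size, costs, and constraints are assumptions for illustration only; the paper's modified augmented Lagrangian and the primal–dual DDP updates themselves are not reproduced.

```python
import numpy as np

MU = 1.0   # assumed van der Pol damping parameter
DT = 0.05  # assumed discretization step


def vdp_step(x, u):
    """One forward-Euler step of the controlled van der Pol oscillator."""
    x1, x2 = x
    dx1 = x2
    dx2 = MU * (1.0 - x1 ** 2) * x2 - x1 + u
    return np.array([x1 + DT * dx1, x2 + DT * dx2])


def augmented_lagrangian_cost(stage_cost, g, lam, rho):
    """Standard augmented-Lagrangian wrapper for inequality constraints g(x, u) <= 0.

    lam: current dual (multiplier) estimate, rho: penalty weight.
    (The paper proposes a modified augmented Lagrangian; this is the textbook form.)
    """
    def cost(x, u):
        viol = np.maximum(0.0, lam / rho + g(x, u))
        return (stage_cost(x, u)
                + 0.5 * rho * np.sum(viol ** 2)
                - np.sum(lam ** 2) / (2.0 * rho))
    return cost


if __name__ == "__main__":
    # Hypothetical example: quadratic stage cost with an input bound |u| <= 1,
    # written as g(x, u) = [u - 1, -u - 1] <= 0.
    g = lambda x, u: np.array([u - 1.0, -u - 1.0])
    ell = lambda x, u: float(x @ x + 0.1 * u ** 2)
    cost = augmented_lagrangian_cost(ell, g, lam=np.zeros(2), rho=10.0)
    x = np.array([1.0, 0.0])
    print(cost(x, 0.5), vdp_step(x, 0.5))
```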