Traditionally, tuning of PID controllers is based on a linear approximation of the dynamics between the manipulated input and the controlled output. The tuning is performed one loop at a time, and interaction effects between the multiple single-input single-output (SISO) feedback control loops are ignored. It is also well known that if the plant operates over a wide operating range, the dynamic behaviour changes, rendering the performance of an initially tuned PID controller unacceptable. The design of PID controllers is, in general, based on linear models obtained by linearizing a nonlinear system around a steady-state operating point. For example, in peak-seeking control, the sign of the process gain changes around the peak value, thereby invalidating the linear model obtained on the other side of the peak. Similarly, at other operating points, the multivariable plant may exhibit new dynamic features such as inverse response.

This work proposes to use deep reinforcement learning (DRL) strategies to simultaneously tune multiple SISO PID controllers using a single DRL agent while enforcing interval constraints on the tuning parameter values. This ensures that interaction effects between the loops are directly factored into the tuning. Interval constraints also ensure safety of the plant during training by keeping the tuning parameter values bounded within a stable region. Moreover, once deployed, a trained agent provides operating-condition-based PID parameters on the fly, ensuring nonlinear compensation in the PID design. The methodology is demonstrated in simulation on a quadruple-tank benchmark system by simultaneously tuning two PI level controllers. The same methodology is then adopted to tune PI controllers for the operating condition under which the plant exhibits a right-half-plane multivariable zero.
Comparisons with PI controllers tuned with standard methods suggest that the proposed method is a viable approach, particularly when simulators are available for the plant dynamics.
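To make the interval-constraint idea concrete, the sketch below shows one common way a DRL agent's unbounded action can be mapped into bounded PI parameters via a tanh squashing function. This is a minimal illustration, not the paper's implementation: the function names, the four-parameter action layout (gain and integral time for each of two PI loops), and the bound values are all assumptions chosen for demonstration.

```python
import math

def squash_to_interval(raw, low, high):
    """Map an unbounded agent output to the interval [low, high] via tanh.

    tanh(raw) lies in (-1, 1); the affine rescaling places the result
    strictly inside [low, high], so the constraint can never be violated.
    """
    return low + (high - low) * (math.tanh(raw) + 1.0) / 2.0

# Hypothetical interval constraints for two PI level controllers
# (Kp: proportional gain, Ti: integral time); values are illustrative only.
BOUNDS = {
    "Kp1": (0.1, 5.0), "Ti1": (5.0, 60.0),
    "Kp2": (0.1, 5.0), "Ti2": (5.0, 60.0),
}

def agent_action_to_pid(raw_action):
    """Convert a raw 4-dimensional DRL action into bounded PI parameters."""
    return {
        name: squash_to_interval(a, *BOUNDS[name])
        for a, name in zip(raw_action, BOUNDS)
    }

params = agent_action_to_pid([0.0, 10.0, -10.0, 1.0])
# A zero raw action lands at the midpoint of its interval;
# large-magnitude raw actions saturate near the interval edges.
```

Because the squashing is built into the action parameterization, the agent can explore freely during training while the plant only ever sees PI parameters inside the designer-specified stable region.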