Abstract

Process control is evolving with the rapid adoption of artificial intelligence (AI) based methods, which have the potential to significantly boost controller performance. This development is valuable to control engineers, especially for the control of nonlinear processes plagued by large offsets and long time delays. With this motivation, we develop in this work a new AI-based strategy for process control by combining reinforcement learning (RL) with a novel approach to inverse model control. This approach expresses a dynamic process model in the controlled variable domain to obtain a computationally efficient, transformed inverse model (TIM). In the developed control strategy, the TIM provides baseline control, which is subsequently improved by RL's deep deterministic policy gradient (DDPG) method in the resulting TIM-RL controller. This controller is examined in three in silico case studies of varying nonlinearity and complexity. Relative to nonlinear model predictive control, the TIM-RL controller shows superior performance, with more than a 30% reduction in integral absolute error, 45% lower settling time, and roughly an order of magnitude smaller latency.
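The paper's TIM and DDPG details are not given in the abstract, but the control structure it describes, a baseline inverse-model action plus a learned RL correction, can be illustrated with a minimal sketch. Here the plant, the one-step inverse, and the linear "policy" stand-in are all hypothetical placeholders, not the authors' models:

```python
import numpy as np

# Hypothetical first-order process: dy/dt = -a*y + b*u (a, b, dt assumed)
a, b, dt = 1.0, 0.5, 0.1

def tim_inverse(y_sp, y):
    # Illustrative inverse model: invert the discretized plant so that the
    # input u drives y toward the setpoint y_sp in one step.
    # y_next = y + dt*(-a*y + b*u)  =>  u = ((y_sp - y)/dt + a*y) / b
    return ((y_sp - y) / dt + a * y) / b

def rl_correction(y_sp, y, w=0.0):
    # Placeholder for a trained DDPG actor: a learned correction added to
    # the baseline TIM action (here a trivial linear policy with weight w).
    return w * (y_sp - y)

def plant_step(y, u):
    # Explicit-Euler simulation of the assumed plant.
    return y + dt * (-a * y + b * u)

y, y_sp = 0.0, 1.0
for _ in range(50):
    u = tim_inverse(y_sp, y) + rl_correction(y_sp, y)
    y = plant_step(y, u)

print(round(y, 4))  # converges to the setpoint 1.0
```

In the actual TIM-RL controller the correction term would come from a DDPG actor network trained against the process, and the inverse model would be the transformed inverse derived in the controlled variable domain rather than this toy one-step inversion.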
