Abstract

In this study, we propose a novel process control strategy that combines a multiple neural network (MNN) with reinforcement learning (RL) to produce an MNNRL controller. The artificial neural networks (ANNs) constituting the MNN are trained on repositories of optimal control and state data to predict the control actions needed to reach or remain at a set-point. This ability is significantly enhanced by adding an RL component to the MNNRL controller, providing improved set-point tracking, robustness against disturbances, and real-time learning that allows successful adaptation in the presence of persistent parameter upsets. Controller performance is examined in three case studies based on simulations of continuous processes with varying degrees of nonlinearity. Relative to both the MNN and nonlinear model predictive control (NMPC), the MNNRL controller delivers about 32% lower integral absolute error (IAE) and 50% shorter settling time, with suppressed over-/under-shoots in the controlled variables. Further, the controller is found to be fairly robust against disturbances and, through real-time learning, can adapt to process dynamics altered by persistent process upsets. Its computation time is about an order of magnitude less than that of NMPC.
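The integral absolute error (IAE) cited above is a standard set-point-tracking metric: the time integral of the absolute deviation between the set-point and the controlled variable. A minimal sketch of how it can be approximated from sampled data follows; the function name, signal values, and sampling step are illustrative, not taken from the paper.

```python
import numpy as np

def iae(setpoint, measured, dt):
    """Approximate IAE = integral of |r(t) - y(t)| dt via a Riemann sum
    over uniformly sampled data with step dt."""
    err = np.abs(np.asarray(setpoint, dtype=float) - np.asarray(measured, dtype=float))
    return float(np.sum(err) * dt)

# Illustrative example: constant set-point of 1.0 and a rising response
y = [0.0, 0.5, 0.8, 0.95, 1.0]
print(iae([1.0] * 5, y, dt=0.1))  # sums |error| * dt over the samples
```

A lower IAE indicates that the controlled variable stays closer to the set-point over the run, which is the sense in which the MNNRL controller's ~32% reduction is reported.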
