Abstract

Efficient supervisory control systems are crucial for maintaining the desired operational performance of complex industrial processes. Developers of these systems face several challenges, such as the need for accurate physical models, the variability and uncertainty of process operating conditions, and the coordination of local controllers to reach the desired global performance. This paper proposes an intelligent supervisory control approach based on causal reinforcement learning (CRL) that manipulates the setpoints of the process's local controllers in a way that optimizes its key performance indicators (KPIs), thereby improving the energy efficiency of the process. The approach adopts deep reinforcement learning (DRL) to develop an efficient control policy through interaction with a process simulation. The DRL training history is then exploited using interpretable machine learning and process mining to build a discrete event system (DES) model, in the form of a state-event graph. The DES model identifies causal relationships between events and makes the control policy developed by the DRL method interpretable. The discovered DES is then treated as a Markov decision process on which the Q-learning algorithm is applied as a CRL supervisor. The supervisor incorporates causal knowledge into its training process, thereby improving the DRL control policy and identifying the event paths that optimize the process's KPIs. The proposed approach is validated using two heat recovery systems in a pulp & paper mill. It achieves a control policy that reduces energy consumption by up to 15.6% for the first system and 5.02% for the second, compared to expert baseline methods.
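To make the supervisor step concrete, the sketch below shows tabular Q-learning over a toy state-event graph treated as a Markov decision process, in the spirit of the CRL supervisor described above. The graph, its states, events, and rewards are all illustrative assumptions, not the paper's actual mined DES or KPI values.

import random
from collections import defaultdict

# Hypothetical state-event graph mined from the DRL training history:
# each state maps an event (action) to (next_state, reward). Rewards are
# illustrative stand-ins for KPI improvements (e.g., energy savings).
STATE_EVENT_GRAPH = {
    "s0": {"raise_setpoint": ("s1", -1.0), "lower_setpoint": ("s2", 0.5)},
    "s1": {"lower_setpoint": ("s2", 1.0), "hold": ("s1", -0.2)},
    "s2": {"hold": ("s2", 2.0), "raise_setpoint": ("s0", -0.5)},
}

def q_learning(graph, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1, horizon=20):
    """Tabular Q-learning over the DES treated as a Markov decision process."""
    q = defaultdict(float)  # (state, event) -> estimated value
    for _ in range(episodes):
        state = "s0"
        for _ in range(horizon):
            events = list(graph[state])
            # epsilon-greedy event selection
            if random.random() < epsilon:
                event = random.choice(events)
            else:
                event = max(events, key=lambda e: q[(state, e)])
            next_state, reward = graph[state][event]
            # one-step temporal-difference update
            best_next = max(q[(next_state, e)] for e in graph[next_state])
            q[(state, event)] += alpha * (reward + gamma * best_next - q[(state, event)])
            state = next_state
    return q

if __name__ == "__main__":
    q = q_learning(STATE_EVENT_GRAPH)
    # Greedy policy: the event the supervisor would recommend in each state.
    for state in STATE_EVENT_GRAPH:
        best = max(STATE_EVENT_GRAPH[state], key=lambda e: q[(state, e)])
        print(f"{state}: {best}")

Running the script prints one recommended event per state; chaining these greedy choices traces the kind of KPI-optimizing event path the supervisor is meant to identify.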

