Abstract

This article presents a novel technique for plant-wide performance optimization of large-scale unknown industrial processes, obtained by integrating reinforcement learning with multiagent game theory. A main advantage of this technique is that plant-wide optimal performance is achieved in a distributed manner: multiple agents solve simplified local nonzero-sum optimization problems so that a global Nash equilibrium is reached. To this end, the plant-wide performance optimization problem is first reformulated, by decomposition, into local optimization subproblems for each production index in a multiagent framework. Then, nonzero-sum graphical game theory is used to compute the operational indices for each unit process so as to reach the global Nash equilibrium, at which the production indices follow their prescribed target values. The stability and the global Nash equilibrium of this multiagent graphical game solution are rigorously proved. Reinforcement learning methods are then developed for each agent to solve the nonzero-sum graphical game problem using real-time data measurements available in the system, without requiring knowledge of the plant dynamics. Finally, emulation results based on measured data from a large mineral processing plant in Gansu Province, China, demonstrate the effectiveness of the proposed automated decision algorithm.
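To make the distributed Nash-seeking idea concrete, the following is a minimal, hypothetical sketch (not the article's method): a static nonzero-sum game on a graph in which each agent adjusts its own operational index using only finite-difference queries of its local cost, a toy analogue of solving local subproblems from measured data without a plant model. The graph, cost form, and all numerical values are assumptions for illustration.

```python
import numpy as np

# Hypothetical nonzero-sum graphical game (illustrative only).
# Each agent i chooses an operational index u[i]; its local cost
# couples only with its graph neighbours. Agents take gradient
# steps using finite-difference cost evaluations ("measurements"),
# never the analytic model.

n = 4
nbrs = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # ring graph
a = np.array([2.0, 2.5, 3.0, 2.2])   # assumed local curvatures
b = np.array([1.0, -0.5, 0.8, 0.3])  # assumed local target pulls
c = 0.3                              # assumed coupling strength

def cost(i, u):
    """Local cost of agent i: strictly convex in its own action u[i]."""
    coupling = sum(u[i] * u[j] for j in nbrs[i])
    return 0.5 * a[i] * u[i] ** 2 - b[i] * u[i] + c * coupling

def fd_grad(i, u, eps=1e-5):
    """Central finite-difference gradient of agent i's cost w.r.t. u[i]."""
    up, dn = u.copy(), u.copy()
    up[i] += eps
    dn[i] -= eps
    return (cost(i, up) - cost(i, dn)) / (2 * eps)

u = np.zeros(n)
for _ in range(2000):  # simultaneous gradient play
    g = np.array([fd_grad(i, u) for i in range(n)])
    u -= 0.1 * g

# At a Nash equilibrium no agent can improve unilaterally,
# so every local gradient (nearly) vanishes.
residual = max(abs(fd_grad(i, u)) for i in range(n))
print(f"Nash residual: {residual:.2e}")
```

With the diagonally dominant couplings assumed here the simultaneous play is a contraction, so the iterates settle at the unique Nash equilibrium; the article's setting differs in that each agent faces a dynamic optimal-control subproblem solved by reinforcement learning rather than a static quadratic game.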
