Abstract

Rougher flotation is the first and most fundamental concentration stage in flotation plants. It comprises unit processes operating on a fast time scale and economic performance measurements, known as operational indices, measured on a slower time scale. Optimizing the operational process of rougher flotation circuits is of great importance because of the substantial economic benefit that follows from optimal operational indices. This paper presents a novel off-policy Q-learning method that learns the optimal solution to the rougher flotation operational process without knowledge of the dynamics of either the unit processes or the operational indices. To this end, the optimal operational control problem for dual-rate rougher flotation processes is first formulated. Second, an H∞ tracking control problem is developed to optimally prescribe the set-points for the rougher flotation processes. Then, a zero-sum game off-policy Q-learning algorithm is proposed to find the optimal set-points using measured data only. Finally, simulation experiments demonstrate the effectiveness of the proposed method.
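
The abstract does not spell out the algorithm, so the following is only a minimal sketch of the general idea behind zero-sum game off-policy Q-learning for an H∞-type problem: the control input is the minimizing player, the disturbance is the maximizing player, a quadratic Q-function kernel is identified from measured data by least squares, and both policies are updated from the partitioned kernel. The linear model (A, B, D), the weights, and all dimensions below are hypothetical placeholders used only to generate data and are not taken from the paper.

```python
import numpy as np

np.random.seed(0)
n, m, q = 3, 1, 1                          # state, control, and disturbance dimensions (assumed)
A = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.1],
              [0.0, 0.0, 0.7]])            # hypothetical dynamics, used only for data generation
B = np.array([[0.0], [0.0], [1.0]])
D = np.array([[0.1], [0.0], [0.0]])
Qw, R, gamma2 = np.eye(n), np.eye(m), 5.0  # tracking/control weights, attenuation level gamma^2

def quad_features(z):
    """Upper-triangular features of z for the quadratic Q-function Q(z) = z' H z."""
    zz = np.outer(z, z)
    i, j = np.triu_indices(len(z))
    return zz[i, j] * np.where(i == j, 1.0, 2.0)   # off-diagonal terms appear twice in z'Hz

p = n + m + q
K = np.zeros((m, n))                       # control policy u = -K x (minimizing player)
L = np.zeros((q, n))                       # disturbance policy w = L x (maximizing player)

for it in range(30):
    # Collect data with a behavior policy: target control plus exploration noise,
    # and an arbitrary measured disturbance sequence -- hence "off-policy".
    Phi, y, x = [], [], np.random.randn(n)
    for k in range(200):
        u = -K @ x + 0.5 * np.random.randn(m)
        w = 0.2 * np.random.randn(q)
        x_next = A @ x + B @ u + D @ w
        # Game Bellman equation:
        # Q(x_k, u_k, w_k) = r_k + Q(x_{k+1}, -K x_{k+1}, L x_{k+1})
        r = x @ Qw @ x + u @ R @ u - gamma2 * (w @ w)
        z_k = np.concatenate([x, u, w])
        z_next = np.concatenate([x_next, -K @ x_next, L @ x_next])
        Phi.append(quad_features(z_k) - quad_features(z_next))
        y.append(r)
        x = x_next if np.linalg.norm(x_next) < 1e3 else np.random.randn(n)
    # Identify the Q-function kernel H from data by least squares.
    h, *_ = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)
    H = np.zeros((p, p)); H[np.triu_indices(p)] = h
    H = H + np.triu(H, 1).T                # rebuild the symmetric kernel matrix
    # Partition H and update both players (saddle point of the quadratic Q-function).
    Hxu, Hxw = H[:n, n:n+m], H[:n, n+m:]
    Huu, Huw, Hww = H[n:n+m, n:n+m], H[n:n+m, n+m:], H[n+m:, n+m:]
    K = np.linalg.solve(Huu - Huw @ np.linalg.solve(Hww, Huw.T),
                        Hxu.T - Huw @ np.linalg.solve(Hww, Hxw.T))
    L = -np.linalg.solve(Hww - Huw.T @ np.linalg.solve(Huu, Huw),
                         Hxw.T - Huw.T @ np.linalg.solve(Huu, Hxu.T))

print("control gain K:", K)
print("worst-case disturbance gain L:", L)
```

In the setting described by the abstract, the trajectories would instead come from the dual-rate flotation operation (set-points, unit-process measurements, and operational indices), so the optimal set-points could be learned from measured data without identifying the model first.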
