Abstract

Discretely controlled continuous systems constitute a special class of continuous-time hybrid dynamical systems in which timely switching to alternative control modes is used for dynamic optimization in uncertain environments. Each mode implements a parametrized feedback control law until a stopping condition is triggered by the activation of a constraint on states, controls, or disturbances. For optimal operation under uncertainty, a novel simulation-based algorithm that combines dynamic programming with event-driven execution and Gaussian processes is proposed to learn a switching policy for mode selection. To cope with the size and dimensionality of the state space and the continuum of control-mode parameters, Bayesian active learning is proposed using a utility function that trades off information content with policy improvement. Probabilistic models of the state transition dynamics following each mode execution are fitted to data obtained by increasingly biasing operating conditions. Throughput maximization in a hybrid chemical plant is used as a representative case study.
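The following is a minimal sketch of the Bayesian active-learning idea described above: a Gaussian process is fitted to simulated outcomes of mode executions, and new (state, mode-parameter) candidates are scored by a utility that trades off predicted policy improvement against information content (predictive uncertainty). It is illustrative only and not the paper's algorithm; `simulate_mode`, the toy dynamics, the scikit-learn GP, and the weight `beta` are all assumptions, and the event-driven dynamic programming over modes is omitted.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Hypothetical stand-in for an event-driven mode simulation: from scalar state x,
# running a mode with parameter theta until its stopping condition fires yields a
# noisy reward. The dynamics here are purely illustrative.
def simulate_mode(x, theta):
    return -(theta - 0.6 * x) ** 2 + 0.1 * rng.normal()

# Initial design: a few random (state, mode-parameter) pairs evaluated by simulation.
X = rng.uniform(0.0, 1.0, size=(8, 2))            # columns: state x, mode parameter theta
y = np.array([simulate_mode(x, th) for x, th in X])

kernel = ConstantKernel(1.0) * RBF(length_scale=0.2)
gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-2, normalize_y=True)

# Bayesian active-learning loop: score candidates by expected improvement over the
# best observed outcome plus an information term (predictive std), simulate the
# winner, and refit the probabilistic transition/outcome model.
candidates = rng.uniform(0.0, 1.0, size=(200, 2))
beta = 1.0                                        # information-vs-improvement weight (assumed)
for _ in range(20):
    gp.fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    utility = (mean - y.max()) + beta * std       # policy-improvement term + information term
    best = candidates[np.argmax(utility)]
    X = np.vstack([X, best])
    y = np.append(y, simulate_mode(best[0], best[1]))

print("best simulated (state, theta):", X[np.argmax(y)], "reward:", y.max())
```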
