Abstract

Output-Feedback Stochastic Model Predictive Control for nonlinear systems is computationally intractable because it requires solving a Finite Horizon Stochastic Optimal Control Problem. Solving this problem exactly, however, yields a control law with an optimal probing nature, known as dual control, which trades off the benefits of exploration and exploitation. In practice, the intractability of Stochastic Model Predictive Control is typically overcome by replacing the underlying Stochastic Optimal Control problem with a more amenable approximate surrogate, at the cost of the optimal probing nature of the control signals. While some approaches superimpose probing on the control actions, they do so sub-optimally. In this paper, we instead approximate the system dynamics by a Partially Observable Markov Decision Process whose Finite Horizon Stochastic Optimal Control Problem can be solved for an optimal control policy, which is then implemented in receding horizon fashion. This procedure preserves probing in the control actions. We further discuss a numerical example in healthcare decision making, highlighting the duality in stochastic optimal receding horizon control.
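To make the receding-horizon procedure concrete, the following is a minimal sketch of solving a finite-horizon POMDP control problem over beliefs and applying only the first optimal action, in the spirit of a healthcare (treat/wait) decision. The two-state model, its probabilities, and its rewards are entirely hypothetical illustration values, not taken from the paper.

```python
import numpy as np

# Hypothetical two-state "healthy/sick" POMDP; all numbers are invented
# for illustration of the receding-horizon procedure, not from the paper.
ACTIONS = [0, 1]                  # 0 = wait, 1 = treat
T = np.array([                    # T[a, s, s'] state-transition probabilities
    [[0.9, 0.1], [0.2, 0.8]],     # wait: healthy tends to stay; sick stays sick
    [[0.95, 0.05], [0.6, 0.4]],   # treat: sick patients often recover
])
Z = np.array([                    # Z[a, s', o] observation probabilities
    [[0.8, 0.2], [0.3, 0.7]],     # wait: only a noisy symptom reading
    [[0.9, 0.1], [0.1, 0.9]],     # treat: includes a test, sharper signal (probing)
])
R = np.array([                    # R[a, s] immediate reward
    [1.0, -1.0],                  # wait: good if healthy, bad if sick
    [0.5, -0.5],                  # treat: costly, but limits harm when sick
])

def belief_update(b, a, o):
    """Bayes filter: posterior belief over states after action a, observation o."""
    bp = Z[a, :, o] * (T[a].T @ b)
    return bp / bp.sum()

def q_value(b, a, depth):
    """Expected finite-horizon return of action a from belief b (expectimax)."""
    total = R[a] @ b
    if depth == 1:
        return total
    pred = T[a].T @ b                       # predictive belief over next states
    for o in range(Z.shape[2]):
        p_o = Z[a, :, o] @ pred             # probability of observing o
        if p_o > 1e-12:
            bo = belief_update(b, a, o)
            total += p_o * max(q_value(bo, ap, depth - 1) for ap in ACTIONS)
    return total

def mpc_action(b, horizon=3):
    """Solve the finite-horizon problem and return only the first optimal action,
    as in receding-horizon (MPC) implementation of the POMDP policy."""
    return max(ACTIONS, key=lambda a: q_value(b, a, horizon))
```

Because the value is computed over beliefs, an action that sharpens the belief (here, the informative test attached to treating) is rewarded through its effect on future decisions, which is how the probing (dual control) effect survives the receding-horizon implementation.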
