Abstract
Output-feedback Stochastic Model Predictive Control based on Stochastic Optimal Control for nonlinear systems is computationally intractable because it requires solving a Finite Horizon Stochastic Optimal Control Problem. Solving this problem exactly, however, yields a control law with an optimal probing nature, known as dual control, which trades off the benefits of exploration against those of exploitation. In practice, the intractability of Stochastic Model Predictive Control is typically overcome by replacing the underlying Stochastic Optimal Control problem with a more amenable approximate surrogate problem, which, however, sacrifices the optimal probing nature of the control signals. While probing can be superimposed in some approaches, this is done sub-optimally. In this paper, we examine approximating the system dynamics by a Partially Observable Markov Decision Process with its own Finite Horizon Stochastic Optimal Control Problem, which can be solved for an optimal control policy and implemented in receding-horizon fashion. This procedure preserves probing in the control actions. We further discuss a numerical example in healthcare decision making, highlighting the duality in stochastic optimal receding-horizon control.
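To make the receding-horizon POMDP idea above concrete, here is a minimal, self-contained sketch. It is not the paper's method or its healthcare example; it uses the classic two-state "tiger" POMDP as a stand-in, with assumed parameters (observation accuracy 0.85, listening cost -1, rewards +10/-100, and door-opening treated as terminal). At each step, the finite-horizon problem is solved exactly by exhaustive belief-tree search, and only the first action is applied, so probing ("listen") emerges from the optimization itself rather than being superimposed:

```python
# Illustrative two-state "tiger" POMDP; all parameters are assumptions,
# not taken from the paper. Belief b = P(tiger behind left door).
ACTIONS = ["listen", "open-left", "open-right"]

def reward(b, a):
    # Expected immediate reward under belief b.
    if a == "listen":
        return -1.0
    if a == "open-left":  # opening the tiger's door is heavily penalized
        return b * (-100.0) + (1 - b) * 10.0
    return b * 10.0 + (1 - b) * (-100.0)

def obs_prob(o, s):
    # Listening reports the correct side with probability 0.85.
    return 0.85 if o == s else 0.15

def belief_update(b, o):
    # Bayes' rule after a "listen" action (the hidden state is static).
    num = obs_prob(o, 0) * b
    return num / (num + obs_prob(o, 1) * (1 - b))

def value(b, h):
    # Finite-horizon optimal value/action via exhaustive belief-tree search.
    if h == 0:
        return 0.0, None
    best, best_a = -float("inf"), None
    for a in ACTIONS:
        q = reward(b, a)
        if a == "listen":  # door-opening is modeled as terminal: no future term
            for o in (0, 1):
                p_o = obs_prob(o, 0) * b + obs_prob(o, 1) * (1 - b)
                q += p_o * value(belief_update(b, o), h - 1)[0]
        if q > best:
            best, best_a = q, a
    return best, best_a

# Receding-horizon loop: re-solve the horizon-3 problem at every step,
# apply only the first action, then update the belief with the new observation.
b, actions = 0.5, []
for t in range(5):
    _, a = value(b, h=3)
    actions.append(a)
    if a != "listen":
        break
    b = belief_update(b, 0)  # simulate always hearing "left"
```

Note how the dual effect appears: at an uninformative belief (b = 0.5) the optimal finite-horizon action is the information-gathering "listen", and the controller only commits to opening a door once the belief is sharp enough that exploitation dominates.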