Abstract
We consider discrete-time infinite horizon deterministic optimal control problems with nonnegative cost per stage, and a destination that is cost free and absorbing. The classical linear-quadratic regulator problem is a special case. Our assumptions are very general, and allow the possibility that the optimal policy may not stabilize the system, e.g., may not reach the destination either asymptotically or in a finite number of steps. We introduce a new unifying notion of stable feedback policy, based on perturbation of the cost per stage, which, in addition to implying convergence of the generated states to the destination, quantifies the speed of convergence. We consider the properties of two distinct cost functions: $J^*$, the overall optimal, and $\hat J$, the restricted optimal over just the stable policies. Different classes of stable policies (with different speeds of convergence) may yield different values of $\hat J$. We show that for any class of stable policies, $\hat J$ is a solution of Bel...
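As a concrete illustration of the linear-quadratic special case mentioned above, the optimal cost can be computed by value iteration on the Riccati recursion. The following is a minimal sketch for a scalar system $x_{k+1} = a x_k + b u_k$ with stage cost $q x_k^2 + r u_k^2$; all numerical values are illustrative and chosen so that the open-loop system is unstable.

```python
# Scalar discrete-time LQR via value iteration on the Riccati recursion.
# Dynamics x_{k+1} = a x_k + b u_k, stage cost q x_k^2 + r u_k^2.
a, b, q, r = 1.1, 1.0, 1.0, 1.0  # a > 1: unstable in open loop

# Iterate K <- q + a^2 K - (a b K)^2 / (r + b^2 K), starting from K = 0.
# The iterates converge to the solution of the algebraic Riccati equation,
# i.e., the quadratic coefficient of the optimal cost J*(x) = K x^2.
K = 0.0
for _ in range(200):
    K = q + a**2 * K - (a * b * K) ** 2 / (r + b**2 * K)

# Optimal linear feedback u = -L x, with closed-loop coefficient a - b L.
L = a * b * K / (r + b**2 * K)
closed_loop = a - b * L
print(K, L, closed_loop)  # here |closed_loop| < 1: the optimal policy stabilizes
```

In this well-behaved instance the optimal policy is stabilizing; the abstract's point is that, under the more general assumptions considered in the paper, this need not hold, which motivates restricting attention to stable policies and the cost function $\hat J$.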