We consider the problem of designing a controller that minimizes the worst-case peak-to-peak gain of a closed-loop system. In particular, we concentrate on the case where the controller has access to the state of a linear plant and possibly knows the maximal amplitude of the disturbance input. Applying the principle of optimality, we derive a dynamic programming formulation of the optimization problem. Under mild assumptions, we show that, at each step of the dynamic program, the cost-to-go has the form of a gauge function and can be determined recursively through simple transformations. We study both the finite horizon and the infinite horizon case under different information structures. The proposed approach encompasses and improves upon earlier results based on viability theory. In particular, we present a computational scheme, alternative to the standard bisection algorithm (gamma iteration), that computes the exact value of the worst-case peak-to-peak gain for any finite horizon. We show that the sequence of finite horizon optimal costs converges from below, as the length of the horizon goes to infinity, to the infinite horizon optimal cost. Finally, we show the existence of an optimal state-feedback strategy that is globally exponentially stabilizing, and we derive suboptimal globally exponentially stabilizing strategies from the solutions of finite horizon problems.
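To illustrate the quantity being optimized, consider the open-loop case with no controller: for a discrete-time linear system x(k+1) = A x(k) + B w(k), z(k) = C x(k) with zero initial state, the worst-case peak-to-peak (l-infinity induced) gain over a horizon of length N is the truncated l1 norm of the impulse response. The sketch below is not the paper's dynamic programming scheme; it is a minimal, hypothetical example showing that these finite horizon gains increase with N and so converge from below to the infinite-horizon value, mirroring the convergence property stated above. The system matrices A, B, C are an arbitrary stable example chosen for illustration.

```python
import numpy as np

def finite_horizon_peak_gain(A, B, C, N):
    """Worst-case peak-to-peak gain over horizon N for the open-loop
    SISO system x(k+1) = A x(k) + B w(k), z(k) = C x(k), zero initial
    state: the truncated l1 norm sum_{k=0}^{N-1} |C A^k B|."""
    gain, Ak = 0.0, np.eye(A.shape[0])
    for _ in range(N):
        gain += abs(float(C @ Ak @ B))  # |C A^k B|, k-th impulse-response term
        Ak = A @ Ak
    return gain

# Hypothetical stable example (spectral radius < 1): the finite horizon
# gains are nondecreasing in N and approach the infinite-horizon gain.
A = np.array([[0.5, 1.0], [0.0, 0.3]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
gains = [finite_horizon_peak_gain(A, B, C, N) for N in (5, 10, 40)]
```

Because every term in the sum is nonnegative, lengthening the horizon can only increase the gain, which is the elementary open-loop analogue of the convergence-from-below result for the optimal closed-loop costs.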