One of the major concentrated activities of the past decade in control theory has been the development of so-called H-infinity control theory, which addresses the issue of worst-case controller design for linear plants subject to unknown disturbances and plant uncertainties. Among the various time-domain approaches to this class of worst-case design problems, the one that uses the framework of dynamic (differential) game theory stands out as the most natural. This is so because the original H-infinity control problem (in its equivalent time-domain formulation) is in fact a minimax optimization problem, and hence a zero-sum game, in which the controller can be viewed as the minimizing player and the disturbance as the maximizing player. Using this framework, the authors present in this book a complete theory that encompasses continuous-time as well as discrete-time systems, finite as well as infinite horizons, and several different measurement schemes: closed-loop perfect-state, delayed perfect-state, sampled perfect-state, closed-loop imperfect-state, delayed imperfect-state, and sampled imperfect-state information patterns. They also discuss extensions of the linear theory to nonlinear systems, and the derivation of lower-dimensional controllers for systems with regularly and singularly perturbed dynamics. This is the second edition of a 1991 book with the same title which, besides featuring a more streamlined presentation of the results included in the first edition (at places under more refined conditions), also contains substantial new material reflecting developments in the field since 1991. Among these are the nonlinear theory; connections between H-infinity-optimal control and risk-sensitive stochastic control problems; H-infinity filtering for linear and nonlinear systems; and robustness considerations in the presence of regular and singular perturbations.
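The minimax idea can be illustrated with a small scalar example (not taken from the book; the system `x' = a*x + b*u + d*w`, the cost weights, and all numerical values below are assumptions chosen purely for illustration). For the soft-constrained game with cost integrand `q*x^2 + u^2 - gamma^2 * w^2`, the controller's value is characterized by a game Riccati equation; in the scalar case it reduces to a quadratic in the value `Z`, which has a positive solution only when the attenuation level `gamma` is above a threshold:

```python
import math

def game_riccati_scalar(gamma, a=-1.0, b=1.0, d=1.0, q=1.0):
    """Solve the scalar soft-constrained game Riccati equation
        2*a*Z + q - (b**2 - d**2/gamma**2) * Z**2 = 0
    for its positive root Z.  Returns None when no real solution
    exists, i.e. when gamma is below the achievable attenuation level.
    """
    s = b**2 - d**2 / gamma**2
    disc = a**2 + s * q
    if disc < 0:
        return None  # gamma below the optimal attenuation level
    if abs(s) < 1e-12:
        return -q / (2 * a)  # degenerate case: the equation is linear in Z
    return (a + math.sqrt(disc)) / s

# The minimizing player (controller) uses u = -b*Z*x, while the
# maximizing player's worst-case disturbance is w = (d/gamma**2)*Z*x.
print(game_riccati_scalar(1e6))  # gamma -> infinity recovers the LQ value
print(game_riccati_scalar(2.0))  # tighter attenuation: larger value Z
print(game_riccati_scalar(0.7))  # below the threshold: no solution (None)
```

As `gamma` grows the disturbance's influence vanishes and the solution approaches the ordinary LQ Riccati value (here `sqrt(2) - 1`); as `gamma` shrinks the value `Z` grows until, below a critical level, no solution exists and that attenuation level is unachievable.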
Also included are a rather detailed description of the relationship between frequency- and time-domain approaches to robust controller design, and a complete set of results on the existence of value and the characterization of optimal policies in finite- and infinite-horizon LQ differential games. The authors believe that the theory is now at a stage where it can easily be incorporated into a second-level graduate course in a control curriculum, one that would follow a basic course in linear control theory covering LQ and LQG designs. The framework adopted in this book makes such an ambitious plan possible. For the most part, the only prerequisite is a basic knowledge of linear control theory; no background in differential games, or game theory in general, is required, as the requisite concepts and results are developed in the book at the appropriate level. The book is written so that the theory for continuous- and discrete-time systems can be followed independently (and also in parallel).