Abstract
This work describes and examines a differential dynamic programming (DDP) algorithm for constrained, discrete-time optimal control. The algorithm has performed successfully on a large-scale reservoir control problem [11]. The present paper gives conditions under which convergence to a stationary policy is assured. The convergence demonstration hinges on a notion we refer to as the stagewise Kuhn-Tucker condition: strategies generated to satisfy this condition determine policies which satisfy the conventional Kuhn-Tucker conditions. This observation may be of wider importance in discrete optimal control theory, since the stagewise condition might serve as a convenient criterion for constructing strategies.
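The paper's constrained algorithm is not reproduced here, but the flavor of a DDP sweep can be sketched on a toy problem. The following is a minimal illustration, not the paper's method: an unconstrained DDP iteration (backward quadratic value expansion, then a forward rollout with feedforward and feedback corrections) applied to a hypothetical scalar linear-quadratic problem, x_{k+1} = x_k + u_k with stage cost 0.5(x_k² + u_k²) and no terminal cost. All names and the problem itself are illustrative assumptions; the constrained, stagewise-Kuhn-Tucker version would replace the unconstrained stage minimization with a constrained one.

```python
import numpy as np

def ddp(x0, N, iters=2):
    """Unconstrained DDP on x_{k+1} = x_k + u_k, cost sum 0.5*(x^2 + u^2).

    Illustrative sketch only: for this linear-quadratic problem the
    quadratic value expansion is exact, so one sweep is already optimal.
    """
    u = np.zeros(N)                       # nominal control sequence
    for _ in range(iters):
        # forward simulate the nominal trajectory
        x = np.empty(N + 1)
        x[0] = x0
        for k in range(N):
            x[k + 1] = x[k] + u[k]
        # backward pass: propagate V_x, V_xx and compute stage gains
        Vx, Vxx = 0.0, 0.0
        kff = np.empty(N)                 # feedforward terms
        Kfb = np.empty(N)                 # feedback gains
        for k in range(N - 1, -1, -1):
            Qx, Qu = x[k] + Vx, u[k] + Vx
            Qxx = Quu = 1.0 + Vxx
            Qux = Vxx
            kff[k] = -Qu / Quu            # unconstrained stage minimizer
            Kfb[k] = -Qux / Quu
            Vx = Qx - Qux * Qu / Quu
            Vxx = Qxx - Qux * Qux / Quu
        # forward pass: full step (exact here, since the problem is LQ)
        xn = np.empty(N + 1)
        xn[0] = x0
        un = np.empty(N)
        for k in range(N):
            un[k] = u[k] + kff[k] + Kfb[k] * (xn[k] - x[k])
            xn[k + 1] = xn[k] + un[k]
        u = un
    # evaluate the resulting policy's cost
    x = np.empty(N + 1)
    x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + u[k]
    cost = 0.5 * np.sum(x[:N] ** 2 + u ** 2)
    return u, cost

u_opt, cost = ddp(x0=1.0, N=20)

# cross-check against the exact discrete Riccati recursion for this problem
P = 0.0
for _ in range(20):
    P = 1.0 + P - P * P / (1.0 + P)
riccati_cost = 0.5 * P * 1.0 ** 2
```

Because the toy problem is linear-quadratic, the DDP cost should agree with the Riccati value to machine precision, which provides a simple sanity check on the sweep.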