Abstract

The study of optimal control problems defined on infinite intervals has recently become a rapidly growing area of research. These problems arise in engineering [1,2], in models of economic growth [5,16,26–28,40], in infinite discrete models of solid-state physics related to dislocations in one-dimensional crystals [3,31], and in the theory of thermodynamical equilibrium for materials [7,14,17–20,34,35,37]. In this survey, we consider discrete-time optimal control problems. Sections 2 and 3 are devoted to autonomous discrete-time control systems on compact metric spaces. In Sec. 2, we present two fundamental tools in the theory of optimal control on an infinite horizon: the reduction to finite cost and the representation formula established in [9]. In Sec. 3, we present a number of results obtained in [32,33] that establish the existence of weakly optimal solutions on an infinite horizon and describe their structure. The turnpike theorem for an infinite-dimensional control system with a convex cost function [39] is considered in Sec. 4. In Sec. 5, we discuss the generalization of this turnpike result to a class of nonautonomous infinite-dimensional control systems with a nonconvex cost function [41]. Section 6 is devoted to Lagrange multipliers for discrete-time problems with a periodic cost function [15]. In Sec. 7, we study the optimal control of finite-state Markov decision processes [11,13].
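For orientation, the setting behind Secs. 2 and 3 can be sketched as follows; the notation here is illustrative and is not quoted from the cited works. One studies programs $\{x_t\}_{t=0}^{\infty}$ in a compact metric space $K$ with a continuous cost function $v$ on $K \times K$ and partial costs

$$\sum_{t=0}^{T-1} v(x_t, x_{t+1}), \qquad T = 1, 2, \dots$$

Since this sum need not converge as $T \to \infty$, optimality must be defined by comparing partial costs. In one common formulation, a program $\{x_t\}_{t=0}^{\infty}$ is called weakly optimal if

$$\liminf_{T \to \infty} \left[ \sum_{t=0}^{T-1} v(x_t, x_{t+1}) - \sum_{t=0}^{T-1} v(y_t, y_{t+1}) \right] \le 0$$

for every program $\{y_t\}_{t=0}^{\infty}$ with $y_0 = x_0$.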
