Abstract

In reality, any economic phenomenon occurs in an uncertain environment, and we now consider optimal control of dynamic systems subject to such uncertainty. General rules are established for finite time-horizon optimal control problems of linear discrete-time systems with additive random disturbances, both in the perfect-information case (Section 5.1) and in the imperfect-information case (Section 5.2). We also derive optimal control rules for linear discrete-time systems with stochastic coefficients as well as additive disturbances (Section 5.3). Our approach is based primarily on the optimality principle of dynamic programming, except at the end of Section 5.3, where we comment on Lagrange multiplier methods applicable to an infinite-horizon problem. Some stochastic optimal control rules turn out to be the same as those for the corresponding nonstochastic systems in which the additive random disturbances are suppressed. This property is called certainty equivalence, and we discuss the principle further in relation to Theil's strategy in Section 5.4. Finally, in Section 5.5, macroeconomic applications of our control rules are presented together with other related control methodologies.
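
To illustrate the certainty-equivalence result mentioned above for the perfect-information, linear-quadratic case, the following is a minimal sketch of the backward dynamic-programming (Riccati) recursion for the system x_{t+1} = A x_t + B u_t + w_t. All matrices and weights below are illustrative assumptions, not values taken from the chapter; the point is that the recursion never references the covariance of the disturbance w_t, so the resulting feedback gains coincide with those of the deterministic system.

```python
# A hedged sketch of finite-horizon LQ control via dynamic programming.
# System and cost matrices here are hypothetical, chosen for illustration.
import numpy as np

def lq_feedback_gains(A, B, Q, R, Q_T, horizon):
    """Backward Riccati recursion; returns gains K_t with u_t = -K_t x_t."""
    P = Q_T                      # terminal value-function matrix
    gains = []
    for _ in range(horizon):
        # K_t = (R + B' P B)^{-1} B' P A
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # P_t = Q + A' P A - A' P B K_t
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    return gains[::-1]           # gains[t] applies at time t

# Illustrative 2-state, 1-input system (assumed values).
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
Q, R, Q_T = np.eye(2), np.array([[1.0]]), 10 * np.eye(2)

gains = lq_feedback_gains(A, B, Q, R, Q_T, horizon=20)
# Certainty equivalence: no noise covariance appears anywhere above,
# so these gains are identical to the noise-free (deterministic) case.
print(gains[0])
```

The additive disturbance affects the expected cost attained under this rule, but not the rule itself; this is the sense in which the stochastic and nonstochastic problems share the same optimal control.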
