The paper develops a simple iterative procedure for deriving linear decision rules which provide the optimal control policy for a stochastic dynamic linear system. The procedure works for a quadratic objective function with any time horizon up to and including infinity, either with or without time discounting. The role of target variables is considered, and there is a discussion of the results which ensue if these targets are incompatible, that is, if they do not satisfy the underlying structural model. The paper concludes with some consideration of the convergence and other properties of the controlled system.

This paper develops a simple iterative method for deriving linear decision rules which provide the control policy for a stochastic dynamic linear system that is optimal for a quadratic criterion. The basic theory in economics was developed by Holt, Simon, Theil, Phillips, and others in the fifties and has recently been extended by Aoki [1], Chow [2 and 3], and Turnovsky [10]. The method described here is similar to that used by Chow [3], where the dynamic structure of the model is used to develop a suitable iterative procedure. This procedure is computationally simple, of low dimensionality, and may be applied to a system with any number of lags, irrespective of whether it is stable or unstable. For economic applications, the underlying system would typically be an econometric model in reduced form which has either been specially estimated as a completely linear model or has been suitably linearized.

In Section 2 of the paper we derive a general procedure for solving an infinite horizon quadratic programming problem, proving both its convergence and optimality properties. In Sections 3 and 4 we discuss how this procedure may be adapted to solve finite and infinite horizon stochastic control problems and demonstrate some properties of the optimal path. Since the method produces an analytically explicit solution, we are able to develop some further convergence properties of the infinite horizon optimal path in Section 5. The specific control problem to be discussed in this paper is one of the following
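As a generic illustration of this class of problem, and not the paper's own formulation or iterative procedure, the sketch below assumes a linear transition x_{t+1} = A x_t + B u_t + e_t and an undiscounted quadratic criterion E Σ (x_t' Q x_t + u_t' R u_t); the textbook backward recursion then yields linear decision rules u_t = -F_t x_t, with additive noise leaving the feedback matrices unchanged (certainty equivalence).

```python
# Minimal sketch (assumed generic LQ setup, not the paper's procedure):
# x_{t+1} = A x_t + B u_t + e_t, cost E sum_t (x_t'Qx_t + u_t'Ru_t).
import numpy as np

def linear_decision_rules(A, B, Q, R, T):
    """Return feedback matrices F_0..F_{T-1} so that u_t = -F_t x_t."""
    P = Q.copy()              # terminal value-function matrix
    rules = []
    for _ in range(T):
        # One backward step: minimize u'Ru + (Ax + Bu)' P (Ax + Bu) over u.
        F = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ F)
        rules.append(F)
    rules.reverse()           # rules[t] is the rule applied at period t
    return rules

# Hypothetical scalar example: x_{t+1} = 0.9 x_t + 0.5 u_t + e_t.
A = np.array([[0.9]]); B = np.array([[0.5]])
Q = np.array([[1.0]]); R = np.array([[0.1]])
F = linear_decision_rules(A, B, Q, R, T=50)
print(F[0])  # early-period rules approach the stationary (infinite-horizon) rule
```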