Abstract

The expected value of a multiplicative performance criterion, represented by the exponential of a quadratic function of the state and control variables, is minimized subject to a discrete stochastic linear system with additive Gaussian measurement and process noise. This cost function, which is a generalization of the mean quadratic cost criterion, allows a degree of shaping of the probability density function of the quadratic cost criterion. In general, the control law depends upon a gain matrix which operates linearly on the smoothed history of the state vector from the initial to the current time. This gain matrix explicitly includes the covariance of the estimation errors of the entire state history. The separation theorem holds although the certainty equivalence principle does not. Two special cases are of importance. The first occurs when only the terminal state is costed. A feedback control law, linear in the current estimate of the state, results where the feedback gains are functionally dependent upon the error covariance of the current state estimate. The second occurs if all the intermediate states are costed but there is no process noise except for an initial condition uncertainty. A feedback law results which depends not only upon the current dynamical state estimate but also on an additional vector which is path dependent.
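For concreteness, the criterion described above can be written, under standard assumptions and with hypothetical symbols not fixed by the abstract (state $x_k$, control $u_k$, weighting matrices $Q_k \geq 0$, $R_k > 0$, and a scalar risk parameter $\theta$), as a sketch of the exponential-of-quadratic cost:

$$
J = \mathbb{E}\!\left[\theta \exp\!\left(\frac{\theta}{2}\left(x_N^{\top} Q_N x_N + \sum_{k=0}^{N-1} \left(x_k^{\top} Q_k x_k + u_k^{\top} R_k u_k\right)\right)\right)\right],
$$

subject to a discrete linear system with additive Gaussian process and measurement noise,

$$
x_{k+1} = A_k x_k + B_k u_k + w_k, \qquad y_k = C_k x_k + v_k,
$$

where $w_k$ and $v_k$ are zero-mean Gaussian. As $\theta \to 0$ this recovers the mean quadratic cost, which is the sense in which the criterion generalizes it; the sign of $\theta$ shapes the density of the quadratic cost toward risk-averse or risk-seeking behavior. The exact notation in the paper may differ.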