Abstract

For a dynamic control system with random model parameters, stochastic open-loop feedback controls can be determined via the stochastic Hamiltonian approach by solving a two-point boundary value problem (BVP) that describes the optimal state and costate trajectories. In general, an analytical solution of this BVP cannot be found. This paper presents two approaches for approximate solutions, each consisting of two independent approximation stages. The first stage is an iteration over linearized BVPs that terminates when the optimal trajectories are represented. These linearized BVPs are then solved either by approximate fixed-point equations (first approach) or by Taylor expansions in the underlying stochastic model parameters (second approach). Either approximation yields a deterministic linear BVP, which can be handled by solving a matrix Riccati differential equation.
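The final step mentioned in the abstract, solving a matrix Riccati differential equation for a deterministic linear BVP, can be sketched numerically. The sketch below is an illustration under standard linear-quadratic assumptions (system matrices `A`, `B` and cost weights `Q`, `R`, `Qf` are hypothetical placeholders, not taken from the paper): the Riccati equation is integrated backward in time from its terminal condition.

```python
import numpy as np
from scipy.integrate import solve_ivp

def solve_riccati(A, B, Q, R, Qf, T):
    """Integrate the matrix Riccati differential equation
        dP/dt = -A^T P - P A + P B R^{-1} B^T P - Q,   P(T) = Qf,
    backward in time from t = T to t = 0."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)

    def rhs(t, p_flat):
        P = p_flat.reshape(n, n)
        dP = -A.T @ P - P @ A + P @ B @ Rinv @ B.T @ P - Q
        return dP.ravel()

    # solve_ivp integrates from T down to 0 when the span is (T, 0).
    return solve_ivp(rhs, (T, 0.0), Qf.ravel(),
                     dense_output=True, rtol=1e-8, atol=1e-10)

# Scalar example (hypothetical data): dx/dt = u, unit state/control weights.
A = np.array([[0.0]])
B = np.array([[1.0]])
Q = np.array([[1.0]])
R = np.array([[1.0]])
Qf = np.array([[0.0]])

sol = solve_riccati(A, B, Q, R, Qf, T=10.0)
P0 = sol.sol(0.0).reshape(1, 1)
# For this scalar case P(t) = tanh(T - t), so P(0) is close to 1.
print(float(P0[0, 0]))
```

In a full implementation, `P(t)` would feed back into the linearized BVP of the iteration described above; here it only demonstrates the backward Riccati integration that closes the second approximation stage.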
