Abstract

Control problems from engineering and economics can be represented by a continuous-time optimal control problem with a convex cost criterion, a nonlinear random differential equation, a stochastic initial state, and deterministic or stochastic controls. Thus, the existence and uniqueness of a solution of systems of first-order differential equations with random parameters is considered first. Then the most important control laws are discussed, such as open-loop (OL), closed-loop (CL), and open-loop feedback (OLF) controls. Since feedback controls can be approximated very efficiently by OLF controls, for practical applications one may confine oneself to the construction of optimal OL controls. Convex approximations of the underlying control problem are then obtained by “inner” linearization of the given control problem, that is, by a linearization of the dynamic equation. Considering the necessary and sufficient optimality conditions for the convex approximation of the original problem, stochastic optimal controls may be represented by means of stochastic optimal control laws obtained by solving certain finite-dimensional stochastic optimization problems. Using a stochastic version of Hamilton–Jacobi theory, stochastic optimal controls may then be obtained by solving the related canonical or Hamiltonian system of first-order differential equations, which is a two-point boundary value problem with random parameters. Approximate solutions of the two-point boundary value problem are then constructed by (i) discretization of the underlying probability distribution or (ii) evaluation of the occurring expectations by means of Taylor expansions with respect to the vector of model parameters. The method is applied to the feedback control of mechanical systems (robots) under stochastic uncertainty.
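As a hedged illustration (the notation below is assumed for exposition and not quoted from the paper), the underlying problem can be stated as

\[
\min_{u(\cdot)} \; \mathbb{E}\left[ \int_{t_0}^{t_f} L\bigl(t,\omega,x(t),u(t)\bigr)\,dt + G\bigl(\omega,x(t_f)\bigr) \right]
\quad \text{subject to} \quad
\dot{x}(t) = g\bigl(t,\omega,x(t),u(t)\bigr), \qquad x(t_0) = x_0(\omega).
\]

With the Hamiltonian \(H(t,\omega,x,u,\lambda) = L(t,\omega,x,u) + \lambda^{\top} g(t,\omega,x,u)\), the canonical (Hamiltonian) system referred to above is the two-point boundary value problem

\[
\dot{x} = \nabla_{\lambda} H, \qquad \dot{\lambda} = -\nabla_{x} H, \qquad
x(t_0) = x_0(\omega), \qquad \lambda(t_f) = \nabla_{x} G\bigl(\omega,x(t_f)\bigr),
\]

with the optimal control obtained by minimizing the (expected) Hamiltonian with respect to u along the trajectory.

The following minimal sketch shows approximation route (i), discretization of the underlying probability distribution, on a hypothetical scalar linear-quadratic instance (the model, parameter values, and all names are illustrative assumptions, not the paper's example). Each scenario a_i of the random parameter carries its own state and adjoint, a single open-loop control u(t) is shared by all scenarios, and the resulting deterministic two-point boundary value problem is solved with a standard solver:

import numpy as np
from scipy.integrate import solve_bvp

# Hypothetical toy model (not from the paper):
#   dx/dt = a(omega) * x + u,  cost  E[ int_0^T (x^2 + u^2) dt + x(T)^2 ].
# The distribution of a(omega) is discretized into scenarios a_i with probabilities p_i.
T = 1.0
a_scen = np.array([0.5, 1.0, 1.5])    # scenario values of the random parameter a(omega)
p_scen = np.array([0.25, 0.5, 0.25])  # scenario probabilities (sum to 1)
x0 = np.ones(3)                       # initial state per scenario (deterministic here)
N = len(a_scen)

def rhs(t, y):
    # y stacks scenario states and adjoints: y = [x_1..x_N, lam_1..lam_N]
    x, lam = y[:N], y[N:]
    u = -0.5 * lam.sum(axis=0)        # stationarity of H: 2u + sum_i lam_i = 0
    dx = a_scen[:, None] * x + u      # state equation per scenario
    dlam = -2.0 * p_scen[:, None] * x - a_scen[:, None] * lam  # adjoint equation per scenario
    return np.vstack([dx, dlam])

def bc(ya, yb):
    # boundary conditions: x_i(0) = x0_i and transversality lam_i(T) = 2 p_i x_i(T)
    return np.concatenate([ya[:N] - x0, yb[N:] - 2.0 * p_scen * yb[:N]])

t = np.linspace(0.0, T, 50)
y_guess = np.zeros((2 * N, t.size))
y_guess[:N] = 1.0                     # crude initial guess for the states
sol = solve_bvp(rhs, bc, t, y_guess)
u_opt = -0.5 * sol.sol(t)[N:].sum(axis=0)  # recovered open-loop control u*(t)
print(sol.status, u_opt[:3])

Replacing the expectation by a weighted sum over finitely many parameter scenarios turns the stochastic canonical system into an ordinary deterministic two-point boundary value problem, which is what makes route (i) attractive for numerical work.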
