Abstract

The optimal bang-bang control problem is considered as one of maximizing the probability that the state hits a target manifold before the outer boundary of a safe region in the control interval [0, T]; hitting the outer boundary corresponds to a breakdown of the control system, and T may or may not be finite. It is assumed that the dynamics of the controlled system can be expressed by a linear stochastic differential equation, and that all the state variables are accessible for direct measurement. A dynamic programming formulation leads to an initial and boundary value problem for the Bellman equation. A discussion is given for a simple scalar system. The initial and boundary value problem for a second-order plant 1/s² is solved by the finite-difference method. Some optimal switching curves are also demonstrated for different target manifolds.
