Abstract

In this paper, we investigate a sparse optimal control problem for continuous-time stochastic systems. We adopt the dynamic programming approach and analyze the optimal control via the value function. Due to the non-smoothness of the L0 cost functional, the value function is, in general, not differentiable on its domain. We therefore characterize the value function as a viscosity solution of the associated Hamilton–Jacobi–Bellman (HJB) equation. Based on this result, we derive a necessary and sufficient condition for L0 optimality, which immediately yields the optimal feedback map. In particular, for control-affine systems, we examine the relationship with the L1 optimal control problem and establish an equivalence theorem.

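To make the setting concrete, the following is a minimal sketch of the type of problem the abstract describes; the dynamics f, diffusion coefficient \sigma, running cost \ell, terminal cost g, and control set U are generic illustrative assumptions, not the paper's exact formulation. For a controlled diffusion with an L0 (sparsity-promoting) running cost,

% Illustrative sketch: the symbols f, \sigma, \ell, g, U are assumptions, not the paper's formulation.
\[
dX_t = f(X_t, u_t)\,dt + \sigma(X_t)\,dW_t,
\qquad
J(u) = \mathbb{E}\!\left[\int_0^T \bigl(\mathbf{1}_{\{u_t \neq 0\}} + \ell(X_t, u_t)\bigr)\,dt + g(X_T)\right],
\]

dynamic programming associates with the value function V(t, x) the HJB equation

\[
\partial_t V + \min_{u \in U}\Bigl\{\mathbf{1}_{\{u \neq 0\}} + \ell(x, u)
+ f(x, u)^{\top} \nabla_x V
+ \tfrac{1}{2}\,\operatorname{tr}\bigl(\sigma(x)\sigma(x)^{\top} \nabla_x^2 V\bigr)\Bigr\} = 0,
\qquad V(T, x) = g(x).
\]

Because the indicator term is discontinuous in u, the value function cannot be expected to be smooth, which is why the viscosity-solution framework mentioned above is the natural notion of solution for this HJB equation.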