Abstract

This paper provides a complete derivation of the linear quadratic regulator (LQR) optimal controller and the optimal value function using basic principles from variational calculus. Unlike standard alternatives, the derivation does not rely on the Hamilton-Jacobi-Bellman (HJB) equation, Pontryagin's Maximum Principle (PMP), or the Euler-Lagrange (EL) equations. Because it requires significantly less background, the approach is educationally instructive. It provides a different perspective on how and why key quantities such as the adjoint variable and the Riccati equation arise in optimal control computations, and on their connection to the optimal value function. Additionally, the derivation requires fewer regularity assumptions than are needed to apply the HJB or EL equations. As with PMP, the methods in this paper apply to systems and controls that are piecewise continuous in time.
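To make the role of the Riccati equation and the value function concrete, the following is a minimal sketch of the discrete-time, finite-horizon analogue of the LQR problem discussed above. It is an illustration of the standard backward Riccati recursion, not the continuous-time variational derivation given in the paper; the function name and interface are hypothetical.

```python
import numpy as np

def lqr_finite_horizon(A, B, Q, R, QT, N):
    """Backward Riccati recursion for the discrete-time LQR problem.

    Dynamics: x_{k+1} = A x_k + B u_k
    Cost:     sum_k (x_k' Q x_k + u_k' R u_k) + x_N' QT x_N

    Returns gains K_k (optimal control u_k = -K_k x_k) and matrices P_k;
    the optimal value function (cost-to-go) is V_k(x) = x' P_k x.
    """
    P = QT
    Ks, Ps = [], [P]
    for _ in range(N):
        # K_k = (R + B' P B)^{-1} B' P A
        S = R + B.T @ P @ B
        K = np.linalg.solve(S, B.T @ P @ A)
        # Riccati update: P <- Q + A' P A - A' P B K
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        Ks.append(K)
        Ps.append(P)
    Ks.reverse()
    Ps.reverse()
    return Ks, Ps
```

For the scalar system A = B = Q = R = 1, the recursion converges to the algebraic Riccati solution P satisfying P^2 - P - 1 = 0, i.e. the golden ratio, illustrating how the Riccati equation encodes the quadratic value function.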
