Abstract

Implementing optimal controllers on embedded systems can be challenging, as it requires the solution of an optimization problem in real time. Furthermore, a priori verification of stability, i.e., verification that does not rely on the (possibly only numerically available) solution of an optimization problem, is often not possible. We propose a non-linear control synthesis based on an approximate explicit solution of a constrained optimal control problem, which can be efficiently implemented and verified. The control law is derived from a series expansion of an infinite-horizon optimal control problem via Al'brekht's method. In contrast to existing approaches, we consider parametric uncertainties. Under certain conditions, the proposed method provides an approximate solution of the Hamilton–Jacobi–Bellman (HJB) equation. The feedback control law uses only a finite number of terms of the series expansion, so its evaluation does not require intensive online computation. Furthermore, the optimal control strategy not only achieves approximate infinite-horizon performance but is also parameterized in terms of the varying parameters, which are assumed to be known. We provide a proof of existence and convergence of the optimal control law. Simulation results with a non-linear quadcopter example show the effectiveness of the proposed strategy.
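As a rough sketch of the underlying idea (in standard notation for the parameter-free case; the symbols below are not defined in the abstract and are assumed here): for dynamics \dot{x} = f(x, u) with running cost \ell(x, u), the infinite-horizon value function V and the optimal feedback u = \kappa(x) satisfy the HJB equation, and Al'brekht's method seeks both as power series about the origin,

0 = \min_{u} \left\{ \ell(x, u) + \frac{\partial V}{\partial x}(x) \, f(x, u) \right\}, \qquad
V(x) = \tfrac{1}{2} x^{\top} P x + V^{[3]}(x) + V^{[4]}(x) + \cdots, \qquad
\kappa(x) = K x + \kappa^{[2]}(x) + \kappa^{[3]}(x) + \cdots,

where V^{[k]} and \kappa^{[k]} are homogeneous polynomials of degree k. The lowest-order terms (P, K) are obtained from the Riccati equation of the linearized problem, and the higher-order terms follow by matching powers in the HJB equation. Truncating \kappa after finitely many terms yields the explicit, cheaply evaluable feedback law referred to above; in the proposed parametric setting these terms additionally depend on the known varying parameters.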
