Abstract

This paper is an overview of fundamental linear–quadratic optimal control techniques for linear dynamic systems. The presentation is suitable for undergraduate and graduate students and practicing engineers, and the paper can serve class instructors as supplemental material for undergraduate and graduate control system courses. The paper shows how to solve a dynamic optimization problem: optimize an integral quadratic performance criterion along trajectories of a linear dynamic system over an infinite time horizon (the steady-state linear–quadratic optimal control problem). The solution is obtained by solving a static optimization problem. All derivations in the paper require only elementary knowledge of linear algebra and state space linear system analysis. Results are also presented for the observer-driven linear–quadratic steady-state optimal controller, the output feedback-based linear–quadratic optimal controller, and the Kalman filter-driven linear–quadratic stochastic optimal controller. With a full understanding of the derivations of the linear–quadratic optimal controller and its observer-driven, output feedback, and Kalman filter-driven stochastic variants, students and engineers will feel confident using these controllers in numerous engineering and scientific applications. Several optimal linear–quadratic control case studies involving models of real physical systems, with the corresponding Simulink block diagrams and MATLAB codes, are included in the paper.
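
For orientation, the steady-state linear–quadratic optimal control problem described above can be sketched in its standard form; the notation (A, B, Q, R, P, K) used here is assumed for illustration and may differ from the paper's:

\[
\dot{x}(t) = A x(t) + B u(t), \qquad
J = \int_{0}^{\infty} \left( x^{T}(t) Q x(t) + u^{T}(t) R u(t) \right) dt ,
\]

with weighting matrices \( Q = Q^{T} \geq 0 \) and \( R = R^{T} > 0 \). The static optimization problem that yields the solution is, in this standard formulation, the algebraic Riccati equation

\[
A^{T} P + P A - P B R^{-1} B^{T} P + Q = 0 ,
\]

whose stabilizing solution \( P = P^{T} \geq 0 \) defines the optimal state feedback \( u(t) = -K x(t) \) with \( K = R^{-1} B^{T} P \).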
