Abstract

We present an extended linear quadratic regulator (LQR) design for continuous-time linear time-invariant (LTI) systems subject to exogenous inputs, using a novel feedback control structure. We first propose a model-based solution with cost minimization guarantees for states and inputs using dynamic programming (DP) that outperforms classical LQR in the presence of exogenous inputs. The control law combines the optimal state feedback with an additional optimal term that depends on the exogenous inputs. The control gains for the two components are obtained by solving a set of matrix differential equations; we provide these solutions for both the finite-horizon and steady-state cases. In the second part of the paper, we formulate a reinforcement learning (RL) based algorithm which does not need any model information except the input matrix, and which can compute approximate steady-state extended LQR gains using measurements of the states, the control inputs, and the exogenous inputs. Both the model-based and data-driven optimal control algorithms are tested on a numerical example under different exogenous inputs, showcasing the effectiveness of the designs.
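To make the setting concrete, the following is a minimal sketch of a standard continuous-time LQR formulation with an additive exogenous input; the matrices E, Q, R, the horizon T, and the signals w(t) and s(t) are illustrative assumptions, since the abstract does not give the paper's exact equations.

    \dot{x}(t) = A x(t) + B u(t) + E w(t), \qquad
    J = \int_{0}^{T} \left( x^{\top} Q x + u^{\top} R u \right) dt

    u^{*}(t) = -R^{-1} B^{\top} P(t)\, x(t) \;-\; R^{-1} B^{\top} s(t)

    -\dot{P} = A^{\top} P + P A - P B R^{-1} B^{\top} P + Q, \qquad
    -\dot{s} = \left( A - B R^{-1} B^{\top} P \right)^{\top} s + P E\, w(t)

In this sketch, the first gain is the familiar state-feedback component and the second term depends on the exogenous input through the auxiliary vector s(t); setting \dot{P} = 0 and \dot{s} = 0 (for constant w) yields the corresponding steady-state gains, mirroring the finite-horizon and steady-state cases described in the abstract.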
