Abstract

Model Predictive Control (MPC) denotes a broad framework that encompasses numerous particular approaches. When it is tackled from the side of numerical realization over a discrete time grid (horizon), it normally applies Nonlinear Programming (NP), which, by the use of Lagrange’s Reduced Gradient Method (RGM), minimizes a cost function under various constraints. The cost function is usually a weighted sum of nonnegative differentiable terms that expresses a compromise between various, often contradictory requirements, while the constraints normally contain the dynamic model of the controlled system to express its limited abilities. The computational needs of the method strongly depend on the structure of the cost function and the model. In the case of a Moore-Penrose pseudoinverse, only the inversion of a single square matrix is necessary. If only quadratic cost terms and Linear Time-Invariant (LTI) dynamic models occur, we arrive at Kalman’s Linear Quadratic Regulator (LQR), which can exploit the special advantages of the Riccati equation. It was recently recognized that for a wide class of problems, in analogy with a novel solution of the inverse kinematic task for robots, the gradient of the Auxiliary Function (AF) of the problem can be directly driven to zero by Fixed Point Iteration (FPI). However, it was found that in control problems even the calculation of the Jacobian alone imposes a considerable programming and computational burden. To relieve this burden, a recent solution was proposed for the inverse kinematic task that avoids not only the inversion but even the calculation of the Jacobian. In the present paper it is shown, by the use of a nonlinear single-degree-of-freedom paradigm, that this simplification may be a viable route for solving Adaptive Receding Horizon Control (ARHC) problems.
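
To illustrate the Jacobian-free idea referred to above, the following minimal sketch drives the error of a one-dimensional inverse-kinematics-like task f(q) = x_des to zero by fixed point iteration: the response error is simply fed back through a small gain, so neither the Jacobian nor its inverse is ever computed. The forward map f, the gain alpha, and the target value are illustrative assumptions, not the paper's concrete problem or algorithm.

```python
import math


def f(q):
    """Hypothetical forward map of a single-degree-of-freedom task.

    Maps a joint-like variable q to a task-space value x; chosen only so
    that its slope stays positive and bounded (not taken from the paper).
    """
    return q + 0.4 * math.sin(q)


def jacobian_free_fpi(x_des, q0=0.0, alpha=0.7, tol=1e-9, max_iter=200):
    """Drive the task error f(q) - x_des to zero by fixed point iteration.

    The update q <- q + alpha * (x_des - f(q)) only evaluates f itself;
    no Jacobian df/dq and no matrix inversion is needed.  It converges
    when the map q -> q + alpha * (x_des - f(q)) is contractive, i.e.
    for a gain alpha that is small enough relative to the slope of f.
    """
    q = q0
    for _ in range(max_iter):
        err = x_des - f(q)
        if abs(err) < tol:
            break
        q += alpha * err
    return q


if __name__ == "__main__":
    q_sol = jacobian_free_fpi(x_des=0.7)
    print(f"q = {q_sol:.6f}, f(q) = {f(q_sol):.6f}")
```

In an ARHC setting the same contraction-based feedback can be applied to the gradient of the Auxiliary Function over the horizon instead of a scalar kinematic error; the sketch only shows the mechanism by which the inversion and the Jacobian computation are both evaded.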
