Abstract

In model predictive control, the control action is found at each sampling time by solving an online optimization problem. Computationally, this step is very demanding, especially compared to the evaluation of traditional control laws, which has limited the application of model predictive control to systems with slow dynamics for many years. Recently, several methods have been proposed in the literature that promise a substantial reduction in computation time, either by running the computation in parallel (distributed model predictive control) or by exploiting the problem structure (fast model predictive control). A combination of these methods has not yet been considered in the literature. To achieve this goal, different optimization techniques need to be employed at once, and the order in which these methods are applied matters. This paper considers fast distributed model predictive control combining the alternating direction method of multipliers (ADMM), the interior point method (IPM), and the Riccati iteration for a particular class of multi-agent systems for which the order of the methods can be arbitrarily changed. This leads to two different solver schemes with a trade-off between computation time and the number of communications required to reach consensus. A simplified problem involving the formation control of a fleet of vehicles is considered at the end.
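To illustrate the consensus mechanism that underlies distributed schemes of this kind, the following is a minimal sketch of consensus ADMM on a toy problem. It is not the paper's solver (which combines ADMM with an IPM and Riccati iterations over MPC subproblems); here each agent holds a simple scalar quadratic cost f_i(x) = ½(x − a_i)², so the local minimization is closed-form and the structure of the ADMM loop stands out. The function name and parameters are illustrative.

```python
# Illustrative consensus-ADMM sketch (assumed toy setup, not the paper's method).
# Each agent i has a local cost f_i(x) = 0.5*(x - a_i)^2; all agents must agree
# on a common scalar x. ADMM alternates local solves, an averaging (consensus)
# step, and a dual update.

def consensus_admm(a, rho=1.0, iters=100):
    n = len(a)
    x = [0.0] * n   # local copies held by each agent
    u = [0.0] * n   # scaled dual variables
    z = 0.0         # consensus variable
    for _ in range(iters):
        # local step: x_i = argmin f_i(x) + (rho/2)*(x - z + u_i)^2
        x = [(a[i] + rho * (z - u[i])) / (1.0 + rho) for i in range(n)]
        # consensus step: average of x_i + u_i (one round of communication)
        z = sum(x[i] + u[i] for i in range(n)) / n
        # dual update
        u = [u[i] + x[i] - z for i in range(n)]
    return z

# The consensus value minimizes the sum of local costs, i.e. the mean of a.
print(consensus_admm([1.0, 2.0, 6.0]))  # converges to 3.0
```

Each iteration of the loop corresponds to one round of inter-agent communication, which is the quantity traded off against computation time in the two solver schemes discussed above.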

