Abstract

This paper studies a class of optimal output consensus control problems for discrete-time linear multiagent systems whose system state is only partially observable. Since the optimal control policy depends on the full system state, which is inaccessible in a partially observable system, distributed observers are traditionally employed to recover it. However, an accurate model of a real-world dynamical system is often difficult to obtain, which makes observer design infeasible; moreover, the optimal consensus control policy cannot be solved analytically without the system functions. To overcome these challenges, we propose a data-driven adaptive dynamic programming approach that does not require the complete internal system state. The key idea is to use the input and output sequence as an equivalent representation of the underlying state. Based on this representation, an adaptive dynamic programming algorithm is developed to generate the optimal control policy. To implement this algorithm, we design a neural network-based actor-critic structure to approximate the local performance indices and the control policies. Two numerical simulations demonstrate the effectiveness of our method.
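The central idea above, that a short window of past inputs and outputs can stand in for the unmeasured state, can be illustrated with a minimal sketch. For an observable discrete-time linear system, the output at the next step is an exact linear function of the last few measured inputs and outputs, so dynamics (and hence a control policy) can be learned from input-output data alone, without an observer. The system matrices and window length below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical observable 2-state system; the learner never sees A, B, C
# or the internal state x -- only the input u and output y.
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Collect input-output data under an exploratory (random) input signal.
T = 200
x = np.zeros((2, 1))
ys, us = [], []
for _ in range(T):
    u = rng.standard_normal()
    ys.append(float(C @ x))
    us.append(u)
    x = A @ x + B * u

# Input-output representation: z_k = [y_k, y_{k-1}, u_{k-1}, u_k].
# For an observable 2-state system this window determines x_k exactly,
# so y_{k+1} is a linear function of z_k.
Z, Y = [], []
for k in range(1, T - 1):
    Z.append([ys[k], ys[k - 1], us[k - 1], us[k]])
    Y.append(ys[k + 1])
Z, Y = np.array(Z), np.array(Y)

# Least-squares fit of y_{k+1} from the window; the residual is
# essentially zero, confirming the window captures the full state.
theta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
residual = float(np.max(np.abs(Z @ theta - Y)))
print(residual)
```

In the paper's setting, the same windowed representation replaces the state in the local performance indices, and the actor-critic networks are trained on these features instead of on observer estimates.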
