Abstract

Our ability to manipulate the behavior of complex networks depends on the design of efficient control algorithms and, critically, on the availability of an accurate and tractable model of the network dynamics. While the design of control algorithms for network systems has seen notable advances in the past few years, knowledge of the network dynamics is a ubiquitous assumption that is difficult to satisfy in practice. In this paper we overcome this limitation, and develop a data-driven framework to control a complex network optimally and without any knowledge of the network dynamics. Our optimal controls are constructed using a finite set of data, where the unknown network is stimulated with arbitrary and possibly random inputs. While our controls are provably correct only for networks with linear dynamics, we also characterize their performance against noisy data and in the presence of nonlinear dynamics, as they arise in power grid and brain networks.

Highlights

  • Our ability to manipulate the behavior of complex networks depends on the design of efficient control algorithms and, critically, on the availability of an accurate and tractable model of the network dynamics

  • We address the problem of learning from data point-to-point optimal controls for complex dynamical networks

  • We present a framework to control complex dynamical networks from data generated by non-optimal experiments
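The point-to-point problem in the highlights above can be sketched numerically: for linear dynamics x(k+1) = A x(k) + B u(k), the minimum-energy input steering the network to a target state can be recovered directly from a batch of non-optimal (here, random) experiments, without identifying A or B. The sizes, the random system, and the constrained least-squares construction below are illustrative assumptions for this sketch, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, T, N = 4, 1, 10, 60  # states, inputs, horizon, experiments (illustrative)
A = rng.standard_normal((n, n))
A /= 1.1 * np.max(np.abs(np.linalg.eigvals(A)))  # rescale to a stable network
B = rng.standard_normal((n, m))

def final_state(u):
    """Simulate x(k+1) = A x(k) + B u(k) from x(0) = 0 and return x(T)."""
    x = np.zeros(n)
    for k in range(T):
        x = A @ x + B @ u[k * m:(k + 1) * m]
    return x

# non-optimal experiments: random inputs and the measured final states
U = rng.standard_normal((m * T, N))
Xf = np.column_stack([final_state(U[:, i]) for i in range(N)])

xf = rng.standard_normal(n)  # desired target state

# data-driven control: minimize ||U a|| subject to Xf a = xf, then set u = U a
a0 = np.linalg.pinv(Xf) @ xf                 # one feasible coefficient vector
Nsp = np.linalg.svd(Xf)[2][n:].T             # null-space basis of Xf
z = -np.linalg.pinv(U @ Nsp) @ (U @ a0)      # least-squares correction
u_dd = U @ (a0 + Nsp @ z)

# model-based check (uses A, B only for validation):
# minimum-energy control is the least-norm solution of C u = xf
C = np.column_stack([np.linalg.matrix_power(A, T - 1 - k) @ B for k in range(T)])
u_mb = np.linalg.pinv(C) @ xf
print(np.allclose(u_dd, u_mb, atol=1e-6))  # data-driven and model-based controls agree
```

With enough sufficiently rich experiments, every admissible input is a linear combination of the experimental inputs, so the constrained minimization over the data reproduces the model-based minimum-energy control exactly.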


Introduction

Our ability to manipulate the behavior of complex networks depends on the design of efficient control algorithms and, critically, on the availability of an accurate and tractable model of the network dynamics. Errors in the network model (i.e., missing or extra links, incorrect link weights) are unavoidable, especially if the network is identified from data[18,19] (see Fig. 1a). Because identification algorithms can be inaccurate and time-consuming, several direct data-driven methods have been proposed to bypass the identification step[25]. These include, among others, (model-free) reinforcement learning[26,27], iterative learning control[28], adaptive and self-tuning control[29], and behavior-based methods[30,31]. Moreover, even when an accurate model is available, model-based strategies can fail in practice because of numerical errors in the computation of the minimum-energy control, which are a consequence of the ill-conditioning of the controllability Gramian[9,20].
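The ill-conditioning of the Gramian can be seen in a small numerical experiment (a sketch with an arbitrary random stable network, not one of the networks studied in the paper): when few nodes are actuated relative to the network size, the finite-horizon controllability Gramian W = Σ A^k B Bᵀ (Aᵀ)^k is numerically close to singular, so computing the minimum-energy control through W⁻¹ amplifies rounding errors.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 20, 30                      # illustrative sizes: 20 nodes, horizon 30
A = rng.standard_normal((n, n))
A /= 1.2 * np.max(np.abs(np.linalg.eigvals(A)))  # rescale to a stable network
B = rng.standard_normal((n, 1))    # a single control node

# finite-horizon controllability Gramian W = sum_{k=0}^{T-1} A^k B B^T (A^T)^k
W = np.zeros((n, n))
Ak = np.eye(n)
for _ in range(T):
    W += Ak @ B @ B.T @ Ak.T
    Ak = A @ Ak

# a large condition number means W^{-1}, and hence the minimum-energy
# control, cannot be computed reliably in floating point
print(f"cond(W) = {np.linalg.cond(W):.2e}")
```

The condition number grows rapidly with the number of unactuated states, which is exactly why even exact knowledge of A and B does not guarantee that model-based minimum-energy controls work in practice.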

