Abstract

This paper formulates a multitask optimization problem where agents in the network have individual objectives to meet, or individual parameter vectors to estimate, subject to a smoothness condition over the graph. The smoothness condition softens the transition in the tasks among adjacent nodes and allows incorporating information about the graph structure into the solution of the inference problem. A diffusion strategy is devised that responds to streaming data and employs stochastic approximations in place of actual gradient vectors, which are generally unavailable. The approach relies on minimizing a global cost consisting of the aggregate sum of individual costs regularized by a term that promotes smoothness. We show in this Part I of the work, under conditions on the step-size parameter, that the adaptive strategy induces a contraction mapping and leads to small estimation errors on the order of the small step-size. The results in the accompanying Part II will reveal explicitly the influence of the network topology and the regularization strength on the network performance and will provide insights into the design of effective multitask strategies for distributed inference over networks.
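To make the formulation concrete, the following minimal sketch runs a stochastic-gradient recursion on a smoothness-regularized aggregate cost. Everything in it is an illustrative assumption rather than the paper's exact setup: quadratic (LMS) individual costs, a five-agent path graph, and the constants `mu` and `eta` standing for the step size and the regularization strength.

```python
import numpy as np

# Hedged sketch: smoothness-regularized multitask estimation from streaming
# data. The data model, graph, and constants below are illustrative.
rng = np.random.default_rng(0)
N, M = 5, 3          # agents, parameters per agent
mu, eta = 0.01, 1.0  # step size, smoothness-regularization strength

# Path-graph adjacency and Laplacian L = D - A; L encodes the smoothness prior.
A = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
L = np.diag(A.sum(axis=1)) - A

# True tasks vary smoothly from agent to agent along the path.
W_true = np.linspace(0.0, 1.0, N)[:, None] * np.ones(M)

W = np.zeros((N, M))  # stacked estimates, one row per agent
for _ in range(20000):
    U = rng.standard_normal((N, M))                    # streaming regressors
    d = np.einsum('km,km->k', U, W_true) + 0.1 * rng.standard_normal(N)
    err = d - np.einsum('km,km->k', U, W)              # instantaneous errors
    grad_hat = -err[:, None] * U                       # stochastic gradient approximations
    # One step on the regularized global cost
    #   sum_k J_k(w_k) + (eta/2) * sum over edges of ||w_k - w_l||^2,
    # whose smoothness term has block gradient eta * (L @ W).
    W -= mu * (grad_hat + eta * (L @ W))

# Closed-form solution of the regularized problem in this toy model
# (unit regressor covariance), for comparison with the iterates.
W_star = np.linalg.solve(np.eye(N) + eta * L, W_true)
print(np.abs(W - W_star).max())  # small O(mu)-size deviation, as the abstract claims
```

Note that each agent's update involves only its own streaming data and the iterates of its immediate neighbors (through the sparse rows of the Laplacian), which is what permits a distributed, diffusion-type implementation of the same recursion.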

Highlights

  • Distributed inference allows a collection of interconnected agents to perform parameter estimation tasks from streaming data by relying solely on local computations and interactions with immediate neighbors

  • Most prior literature focuses on single-task problems, where agents with separable objective functions need to agree on a common parameter vector corresponding to the minimizer of an aggregate sum of individual costs [2]–[11]

  • We show in this Part I of the work, under conditions on the step-size learning parameter μ, that the adaptive strategy induces a contraction mapping and that despite gradient noise, it is able to converge in the mean-square-error sense within O(μ) from the solution of the regularized problem, for sufficiently small μ
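Stated as an equation, the highlighted guarantee takes roughly the following form, where w_k⋆ denotes agent k's block of the regularized problem's solution and w_{k,i} its estimate at time i (notation assumed here for illustration, not taken from this page):

```latex
% Hedged restatement of the O(mu) highlight (assumed notation):
%   w_k^\star : agent k's block of the regularized solution
%   w_{k,i}   : agent k's estimate at time i
\limsup_{i \to \infty} \; \mathbb{E}\,\lVert w_k^{\star} - w_{k,i} \rVert^2 \;=\; O(\mu)
```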


Introduction

Distributed inference allows a collection of interconnected agents to perform parameter estimation tasks from streaming data by relying solely on local computations and interactions with immediate neighbors. Many network applications require more complex models and more flexible algorithms than single-task implementations, since their agents may need to estimate and track multiple objectives simultaneously [12]–[23]. Networks of this kind are referred to as multitask networks. In [23], the parameter space is decomposed into two orthogonal subspaces, with one of the subspaces being common to all agents. There is yet another useful way to model relationships among tasks, namely, to formulate optimization problems with appropriate regularization terms encoding these relationships [13]–[18]. The strategy developed in [13], for example, adds squared ℓ2-norm co-regularizers to the aggregate cost in order to promote similarity between the parameter vectors estimated by neighboring agents.
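A generic template for such regularized formulations is the following, hedged since the exact co-regularizers used in [13]–[18] differ; here η > 0 is the regularization strength, c_{kℓ} ≥ 0 are edge weights, and 𝒩_k is the neighborhood of agent k (all assumed notation):

```latex
% Generic smoothness-regularized multitask cost (assumed notation):
%   J_k      : individual cost of agent k
%   \eta     : regularization strength
%   c_{k\ell}: nonnegative weight on edge (k, \ell)
%   \mathcal{N}_k : neighborhood of agent k
\min_{w_1,\ldots,w_N}\;
  \sum_{k=1}^{N} J_k(w_k)
  \;+\; \frac{\eta}{2}\sum_{k=1}^{N}\sum_{\ell\in\mathcal{N}_k}
        c_{k\ell}\,\lVert w_k - w_\ell\rVert^2
```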

