Abstract

Part I of this paper formulated a multitask optimization problem in which agents in the network have individual objectives to meet, or individual parameter vectors to estimate, subject to a smoothness condition over the graph. A diffusion strategy was devised that responds to streaming data and employs stochastic approximations in place of the actual gradient vectors, which are generally unavailable. The approach relied on minimizing a global cost consisting of the aggregate sum of individual costs regularized by a term that promotes smoothness. We examined the first-, second-, and fourth-order stability of the multitask learning algorithm, and the analysis identified conditions on the step-size parameter, the regularization strength, and the data characteristics that ensure stability. This Part II examines the steady-state performance of the strategy. The results reveal explicitly the influence of the network topology and the regularization strength on the network performance and provide insights into the design of effective multitask strategies for distributed inference over networks.
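To make the formulation concrete, the regularized problem described above can be written schematically as follows. The notation here ($w_k$ for the parameter vector of agent $k$, $J_k(\cdot)$ for its individual cost, $\eta \geq 0$ for the regularization strength, $L$ for the graph Laplacian, and $\mu$ for a small step size) is assumed for illustration only; the precise formulation appears in Part I [2]:

\[
\min_{w_1,\ldots,w_N}\; \sum_{k=1}^{N} J_k(w_k) \;+\; \frac{\eta}{2}\, \mathcal{W}^{\mathsf T} (L \otimes I_M)\, \mathcal{W},
\qquad \mathcal{W} \triangleq \operatorname{col}\{w_1,\ldots,w_N\}.
\]

A stochastic-gradient step of the type described in the abstract then takes the schematic form

\[
w_{k,i} \;=\; w_{k,i-1} \;-\; \mu\, \widehat{\nabla J_k}(w_{k,i-1})
\;-\; \mu\eta \sum_{\ell \in \mathcal{N}_k} a_{k\ell}\,\big(w_{k,i-1} - w_{\ell,i-1}\big),
\]

where $\widehat{\nabla J_k}$ is an instantaneous gradient approximation computed from the streaming data, $\mathcal{N}_k$ is the neighborhood of agent $k$, and $a_{k\ell} \geq 0$ are the edge weights defining the Laplacian $L$. The actual diffusion recursion, which splits this update into adaptation and combination steps, is given in Part I [2].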

Highlights

  • As pointed out in Part I [2] of this work, most prior literature on distributed inference over networks focuses on single-task problems, where agents with separable objective functions need to agree on a common parameter vector corresponding to the minimizer of an aggregate sum of individual costs [3]–[13].

  • In order to exploit the smoothness prior, we formulated the inference problem as the minimization of the aggregate sum of individual costs regularized by a term promoting smoothness, known as the graph-Laplacian regularizer [28], [29].

  • Simulation results: we consider a connected network of N = 15 nodes, with parameter vectors of size M = 5 and the topology shown in Fig. 1 (a small illustrative sketch of such a setup follows below).
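The following is a hypothetical Python sketch of a setup of this kind: a random connected topology, a linear data model, and a Laplacian-regularized diffusion-LMS-type update. The topology, data model, step size, and regularization strength used here are illustrative assumptions and are not taken from the paper's experiments.

    import numpy as np

    rng = np.random.default_rng(0)
    N, M = 15, 5                     # 15 agents, parameter vectors of size 5 (as in the highlight)
    mu, eta = 0.01, 1.0              # step size and regularization strength (illustrative values)
    sigma_v = 0.1                    # observation-noise standard deviation (assumed)

    # Build a connected ring topology with a few random shortcuts (placeholder for Fig. 1).
    A = np.zeros((N, N))
    for k in range(N):
        A[k, (k + 1) % N] = A[(k + 1) % N, k] = 1.0
    for k, l in rng.choice(N, size=(5, 2)):
        if k != l:
            A[k, l] = A[l, k] = 1.0
    L = np.diag(A.sum(axis=1)) - A   # graph Laplacian

    # Smooth multitask objective: a common component plus small agent-dependent perturbations.
    w_star = rng.standard_normal(M) + 0.1 * rng.standard_normal((N, M))

    W = np.zeros((N, M))             # current estimates, one row per agent
    for i in range(5000):
        # Adaptation: stochastic-gradient (LMS) step from streaming data at each agent.
        U = rng.standard_normal((N, M))                       # regressors u_{k,i}
        d = np.einsum('km,km->k', U, w_star) + sigma_v * rng.standard_normal(N)
        err = d - np.einsum('km,km->k', U, W)
        Psi = W + mu * err[:, None] * U                       # intermediate estimates psi_{k,i}
        # Laplacian-regularized correction toward neighboring estimates.
        W = Psi - mu * eta * (L @ Psi)

    msd = np.mean(np.sum((W - w_star) ** 2, axis=1))
    print(f"network mean-square deviation: {msd:.3e}")

Here the smoothing correction is applied to the intermediate estimates; the exact ordering of the adaptation and regularization steps in the algorithm of Part I [2] may differ, so this should be read as a sketch of the mechanism rather than as the paper's recursion.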


Introduction

As pointed out in Part I [2] of this work, most prior literature on distributed inference over networks focuses on single-task problems, where agents with separable objective functions need to agree on a common parameter vector corresponding to the minimizer of an aggregate sum of individual costs [3]–[13]. In Part I [2], we considered multitask inference problems where each agent in the network seeks to minimize an individual cost expressed as the expectation of some loss function. In order to exploit the smoothness prior, we formulated the inference problem as the minimization of the aggregate sum of individual costs regularized by a term promoting smoothness, known as the graph-Laplacian regularizer [28], [29].
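The graph-Laplacian regularizer can be expanded edge-wise, which makes the smoothness interpretation explicit. With the same assumed notation as before ($a_{k\ell} \geq 0$ symmetric edge weights, $\mathcal{N}_k$ the neighborhood of agent $k$, $L$ the corresponding Laplacian, and $\mathcal{W} = \operatorname{col}\{w_1,\ldots,w_N\}$):

\[
\mathcal{W}^{\mathsf T} (L \otimes I_M)\, \mathcal{W}
\;=\; \frac{1}{2} \sum_{k=1}^{N} \sum_{\ell \in \mathcal{N}_k} a_{k\ell}\, \|w_k - w_\ell\|^2 .
\]

This quadratic form is small when neighboring agents hold similar parameter vectors, so increasing the regularization strength pushes the solutions of connected agents toward one another, whereas setting it to zero decouples the agents, which then solve their individual problems independently.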

