Abstract

In Part I of this presentation, we formulated single- and multitask quadratic optimization problems in which agents are subject to quadratic, smoothing constraints over a graph. We focused particularly on single-task designs, whereby node uncertainties and their strength relative to the un-regularized cost are tackled altogether by means of an adaptive penalty function. In this sequel, we readdress the multitask problem and propose new distributed implementations for the corresponding exact leaky-RLS solutions. We motivate a network formulation from a standalone viewpoint by capitalizing on the facts that 1) for regressors having uncorrelated entries, the performance of an efficient $\mathcal{O}(M^2)$ conjugate-gradient (CG) realization of the leaky-LS solution is identical to that of an RLS filter; and 2) a CG implementation does not require inversion of the underlying sample covariance matrix. Simple arguments yield an extended network-CG algorithm that relies on node-level recursions employing distinct step sizes. Unlike the exponentially-weighted RLS algorithm, which tapers off regularization over time, a persistent penalty strength conforms with the very purpose of the equivalent network trust-region problem, while granting a well-conditioned solution. The approach further yields another family of single-task algorithms in terms of network linearly-constrained solutions, which can be contrasted with the ones proposed in Part I. In particular, the exact linearly-constrained network LMS implementation proposed here outperforms the adaptive relative-variance NLMS, at much lower computational cost.
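To ground points 1) and 2), the sketch below solves the standalone leaky-LS normal equations $(\hat{R} + \eta I)w = \hat{r}$ by plain conjugate gradient. This is a generic textbook CG solve written in Python, not the network-CG algorithm developed in this paper; the function name leaky_ls_cg and the argument names are our own assumptions. It makes the two facts visible: each pass costs a single $\mathcal{O}(M^2)$ matrix-vector product, and the sample covariance is never inverted.

    import numpy as np

    def leaky_ls_cg(R_hat, r_hat, eta, tol=1e-10):
        # Solve (R_hat + eta*I) w = r_hat by conjugate gradient.  R_hat is a
        # symmetric PSD sample covariance and eta > 0, so the system matrix
        # is symmetric positive definite and CG converges in at most M steps
        # in exact arithmetic -- no matrix inverse is ever formed.
        M = R_hat.shape[0]
        A = R_hat + eta * np.eye(M)        # persistent ridge keeps A well-conditioned
        w = np.zeros(M)
        g = r_hat - A @ w                  # residual of the normal equations
        p = g.copy()                       # initial search direction
        rho = g @ g
        for _ in range(M):
            if np.sqrt(rho) < tol:         # residual small enough: converged
                break
            Ap = A @ p                     # the single O(M^2) product per pass
            alpha = rho / (p @ Ap)         # exact line search along p
            w += alpha * p
            g -= alpha * Ap
            rho_new = g @ g
            p = g + (rho_new / rho) * p    # next A-conjugate direction
            rho = rho_new
        return w

For instance, calling leaky_ls_cg(U.T @ U, U.T @ d, eta) recovers the batch leaky-LS estimate from a data matrix U and an observation vector d.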

Highlights

  • Regularization plays a central part in general parameter estimation, serving a variety of purposes in the realm of adaptation and learning

  • A quadratic penalty function applied to a least-squares (LS) cost leads to a leaky-type solution, whose form deviates from the un-regularized one in proportion to the penalty strength, say, η

  • The standard WRLS algorithm circumvents this problem by allowing the ridge factor to take a special time-varying form, i.e., η_i = λ^{i+1} η, in terms of a forgetting factor λ, as made explicit in the sketch below
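
A minimal worked form of these highlights, written under standard leaky-RLS conventions rather than the paper's exact notation, is the following. The quadratic penalty η||w||² added to the LS cost yields

    \min_{w}\ \|d - Uw\|^{2} + \eta\,\|w\|^{2}
        \;\Longrightarrow\;
        w^{\star} = \left(U^{*}U + \eta I\right)^{-1} U^{*} d,

which deviates from the un-regularized LS solution in proportion to η. The WRLS cost instead embeds a time-varying ridge,

    \min_{w}\ \sum_{j=0}^{i} \lambda^{\,i-j}\,|d(j) - u_{j}w|^{2}
        + \underbrace{\lambda^{\,i+1}\eta}_{\eta_{i}}\,\|w\|^{2},

so that for λ < 1 the ridge factor η_i decays to zero as i grows: this is the tapering-off of regularization that the abstract contrasts with a persistent penalty strength.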


Summary

INTRODUCTION

Regularization plays a central part in general parameter estimation, serving a variety of purposes in the realm of adaptation and learning. Regularization conveys a priori information on the unknown parameters, in such a way that the optimization task is restricted solely to a space of meaningful solutions [1]. Due to their mathematical tractability, quadratic regularizers applied to quadratic costs have been extensively studied within the adaptive filtering community, both in the stochastic and deterministic settings [1]. In the stochastic setting, a gradient-descent approach leads to the so-called leaky-LMS algorithm [3], while in the deterministic scenario, the regularized solution can be expressed via an analogous, exponentially-weighted leaky-RLS (LWRLS) algorithm [4], [5]. These formulations come with a few well-known drawbacks.
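To make the stochastic branch of this statement concrete, the following is a minimal leaky-LMS sketch in Python, assuming the standard real-valued form of the recursion (the function name and array layout are ours, not the paper's). Each iteration is a stochastic-gradient step on the instantaneous regularized cost |d(i) - u_i w|² + η||w||².

    import numpy as np

    def leaky_lms(d, U, mu, eta):
        # Textbook leaky-LMS recursion (cf. [3]); mu is the step size and
        # eta the leakage/penalty strength.
        w = np.zeros(U.shape[1])
        for i in range(len(d)):
            u = U[i]                              # current 1 x M regressor
            e = d[i] - u @ w                      # a priori estimation error
            w = (1 - mu * eta) * w + mu * e * u   # leakage shrinks w toward 0
        return w

Setting eta = 0 recovers the standard LMS update, which is one way to see that the leaky solution deviates from the un-regularized one in proportion to the penalty strength.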

Related Work
Main Results
MOTIVATION
Adaptive leaky-RLS
Decoupled Recursions
DISTRIBUTED MULTITASK FORMULATIONS
SINGLE-TASK SOLUTIONS
Constrained CG-LMS
COMPUTATIONAL COMPLEXITY
SIMULATIONS
CONCLUSIONS