Abstract

This paper describes a problem in distribution-free learning theory wherein a collection of n agents make independent and identically distributed observations of an unknown function and seek consensus in their construction of an optimal estimate. The learning objective is expressed in terms of an error defined over a reproducing kernel Hilbert space (RKHS) and in terms of a multiplier that enforces the consensus constraint. The multiplier can be interpreted as the information shared by the agents to achieve their joint estimate. A two-stage learning dynamic is introduced in which agents alternately perform local updates based on locally available measurements and previously exchanged information about the other agents' estimates, and then calculate and exchange with other nodes certain information functionals of their estimates. It is shown that the learning problem can be expressed as an abstract saddle-point problem over a pair of RKHSs. Sufficient conditions for the well-posedness of the optimization problem are derived in the RKHS framework using Schur complement techniques. Probabilistic bounds on the rate of convergence are derived when a stochastic gradient technique is used for the local update and an inexact Uzawa algorithm is employed to define the information exchange. Practical implementation of the method requires an oracle that evaluates the norm of residuals appearing in the learning dynamic.
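To make the two-stage dynamic concrete, the following is a minimal sketch of one plausible instantiation, not the paper's construction. It makes several simplifying assumptions that are ours, not the authors': the RKHS is replaced by a finite-dimensional random Fourier feature approximation of a Gaussian kernel, the agents communicate over a ring graph with one consensus multiplier per edge, and the primal stochastic gradient step size, Uzawa ascent step size, and ridge weight are fixed constants. Stage 1 is each agent's local stochastic gradient step using its own sample and the current multipliers; Stage 2 is the inexact Uzawa update of the multipliers from the consensus residuals.

```python
import numpy as np

# Hedged sketch of the abstract's two-stage consensus dynamic, under
# assumptions NOT taken from the paper:
#  * the RKHS is approximated by D random Fourier features, so each
#    agent's estimate f_i(x) = w_i . phi(x) is a coefficient vector w_i;
#  * agents sit on a ring graph; one multiplier lam[e] per edge (i, j)
#    enforces the consensus constraint f_i - f_j = 0;
#  * eta (primal SGD step), rho (Uzawa ascent step), and mu (ridge
#    weight) are fixed, hypothetical choices.

rng = np.random.default_rng(0)
n_agents, D, bandwidth = 4, 200, 1.0
W = rng.normal(scale=1.0 / bandwidth, size=D)        # feature frequencies
b = rng.uniform(0, 2 * np.pi, size=D)                # feature phases

def phi(x):
    """Random Fourier feature map approximating a Gaussian kernel."""
    return np.sqrt(2.0 / D) * np.cos(W * x + b)

f_true = lambda x: np.sin(3 * x)                     # unknown target
edges = [(i, (i + 1) % n_agents) for i in range(n_agents)]  # ring graph

w = np.zeros((n_agents, D))          # agents' local estimates
lam = np.zeros((len(edges), D))      # consensus multipliers
eta, rho, mu = 0.05, 0.05, 1e-3

for t in range(2000):
    # Stage 1: local stochastic gradient step at every agent, using its
    # own i.i.d. sample and the multipliers previously received.
    for i in range(n_agents):
        x = rng.uniform(-np.pi, np.pi)
        y = f_true(x) + 0.1 * rng.normal()
        g = (w[i] @ phi(x) - y) * phi(x) + mu * w[i]   # loss + ridge grad
        for e, (a, c) in enumerate(edges):             # multiplier terms
            if a == i:
                g += lam[e]
            elif c == i:
                g -= lam[e]
        w[i] -= eta * g

    # Stage 2: inexact Uzawa ascent -- each edge's multiplier is updated
    # with the current consensus residual and exchanged with neighbors.
    for e, (a, c) in enumerate(edges):
        lam[e] += rho * (w[a] - w[c])

disagreement = max(np.linalg.norm(w[i] - w.mean(0)) for i in range(n_agents))
print(f"max disagreement after training: {disagreement:.4f}")
```

The printed disagreement plays the role of the consensus residual norm that, per the abstract, a practical implementation would obtain from an oracle; in this finite-dimensional surrogate it is simply computed directly.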
