Abstract

Learning a good model initialization from data sources distributed over a multi-agent system is highly promising when the tasks or data are stored in a distributed manner and are not accessible to all agents. This paper focuses on the distributed meta-learning problem and proposes a distributed Reptile meta-learning algorithm. In the proposed algorithm, each agent approximates the global model through a bi-level optimization scheme, where the inner step performs stochastic gradient descent on a specific task, and the outer step combines information from neighboring agents with a gradient-like update. The proposed algorithm avoids computing Hessian-vector products during training, which reduces both computational complexity and memory requirements. We further analyze the convergence properties of the proposed algorithm under a convexity assumption. Finally, we demonstrate its effectiveness on regression and classification tasks. The results show that our algorithm approximates a centralized solution and outperforms the non-cooperative algorithm.
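To make the bi-level structure concrete, the sketch below shows one outer round for a single agent: mix the agent's parameters with its neighbors' parameters, run a few inner SGD steps on a sampled task, and then take a Reptile-style first-order step toward the adapted parameters (no Hessian-vector products). This is only an illustrative interpretation of the abstract: the quadratic toy task, the mixing weights, the step sizes, and the number of inner steps are assumptions, not the paper's exact update rule.

```python
import numpy as np

class QuadraticTask:
    """Toy task: minimize 0.5 * ||params - target||^2 (stands in for a real task loss)."""
    def __init__(self, target):
        self.target = np.asarray(target, dtype=float)
    def grad(self, params):
        return params - self.target

def inner_sgd(params, task, alpha=0.05, steps=5):
    """Inner step: a few (stochastic) gradient descent steps on one sampled task."""
    adapted = params.copy()
    for _ in range(steps):
        adapted = adapted - alpha * task.grad(adapted)
    return adapted

def distributed_reptile_round(params, neighbor_params, mix_weights, task, beta=0.5):
    """One outer round for a single agent (hypothetical update form):
    1) combine the agent's model with its neighbors' models (consensus-style mixing),
    2) take a Reptile-like step toward the task-adapted parameters.
    Only first-order gradients are used; no Hessian-vector products.
    """
    models = [params] + list(neighbor_params)
    mixed = sum(w * p for w, p in zip(mix_weights, models))
    adapted = inner_sgd(mixed, task)
    return mixed + beta * (adapted - mixed)

# Example: one agent with two neighbors on a toy regression task.
agent = np.zeros(3)
neighbors = [np.ones(3), 2.0 * np.ones(3)]
weights = [0.5, 0.25, 0.25]            # assumed mixing weights (e.g., rows of a doubly stochastic matrix)
task = QuadraticTask([1.0, -1.0, 0.5])
agent = distributed_reptile_round(agent, neighbors, weights, task)
print(agent)
```

In this reading, repeating such rounds at every agent, with tasks sampled locally, would drive the agents toward a common initialization without sharing raw data, which matches the cooperative setting described in the abstract.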
