This paper considers decentralized stochastic optimization problems in which each node of a network has access only to its local objective function, defined over large local data samples distributed across the computational nodes. We extend the centralized fast adaptive gradient method with an inexact model to handle large-scale problems in a decentralized manner. We then propose an accelerated decentralized stochastic optimization algorithm by reconstructing the parameter equations and defining new approximate local functions. Furthermore, we provide a convergence analysis of the proposed algorithm and show that it achieves both the optimal stochastic oracle complexity and the optimal communication complexity, each depending on the global condition number. Finally, numerical experiments validate the convergence results of the proposed algorithm.
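To make the problem setting concrete, the following is a minimal sketch of plain decentralized stochastic gradient descent with gossip averaging, not the accelerated algorithm proposed in the paper. The ring topology, mixing matrix W, least-squares local losses, and all parameter values are illustrative assumptions.

```python
# Minimal sketch of the decentralized stochastic setting: n nodes, each
# holding local data, alternate a local stochastic-gradient step with
# gossip averaging over a doubly stochastic mixing matrix W.
# Illustrative only; NOT the paper's accelerated method.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, n_local = 8, 5, 100

# Node i holds local samples (A_i, b_i); its local loss is a least-squares fit.
A = [rng.standard_normal((n_local, dim)) for _ in range(n_nodes)]
x_star = rng.standard_normal(dim)
b = [Ai @ x_star + 0.1 * rng.standard_normal(n_local) for Ai in A]

# Doubly stochastic mixing matrix for a ring: each node averages with its
# two neighbors; rows and columns sum to one.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = 0.25
    W[i, (i + 1) % n_nodes] = 0.25

def stochastic_grad(i, x, batch=10):
    """Unbiased gradient estimate of node i's local loss from a mini-batch."""
    idx = rng.choice(n_local, size=batch, replace=False)
    Ai, bi = A[i][idx], b[i][idx]
    return Ai.T @ (Ai @ x - bi) / batch

X = np.zeros((n_nodes, dim))            # row i = node i's current iterate
step = 0.05
for t in range(500):
    G = np.stack([stochastic_grad(i, X[i]) for i in range(n_nodes)])
    X = W @ (X - step * G)              # local gradient step, then gossip

print("consensus error:", np.linalg.norm(X - X.mean(axis=0)))
print("distance to x*: ", np.linalg.norm(X.mean(axis=0) - x_star))
```

Each communication round here is one multiplication by W (an exchange with neighbors only), which is why the paper's complexity measures count stochastic oracle calls and communication rounds separately.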