Abstract

Adaptive online optimization methods such as ADAGRAD, RMSPROP, and ADAM are widely used to solve large-scale machine learning problems. In existing work, many solutions parallelize communication between peripheral nodes and a central node, which can easily incur high communication costs in practice. Moreover, existing methods often generalize poorly and may even fail to converge because of unstable and extreme learning rates. To tackle these issues, a new distributed adaptive moment estimation method with a dynamic bound on the learning rate (DADABOUND) is developed for online optimization on decentralized networks. The method applies the dynamic learning-rate bound to decentralized optimization, which avoids extreme learning rates and excessive load on central nodes. We also theoretically analyze the convergence properties of the proposed algorithm. Finally, experiments on various tasks show that DADABOUND works well in practice and compares favorably with competing optimization methods.
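
The dynamic learning-rate bound referred to above follows the AdaBound idea of clipping each coordinate's adaptive step size into a band that tightens over time, so the effective learning rate can be neither extremely large nor extremely small. The sketch below illustrates that clipping for a single node's Adam-style update; the bound schedules, parameter names, and the use of NumPy are illustrative assumptions rather than the paper's exact formulation, and the decentralized averaging across network nodes is omitted.

import numpy as np

def clipped_adam_step(param, grad, m, v, step,
                      alpha=1e-3, beta1=0.9, beta2=0.999,
                      final_lr=0.1, gamma=1e-3, eps=1e-8):
    """One Adam-style update whose per-coordinate learning rate is
    clipped into dynamic bounds (AdaBound-style). The bound schedules
    below are illustrative, not the paper's exact choice."""
    # Exponential moving averages of the gradient and its square.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias correction as in Adam.
    m_hat = m / (1 - beta1 ** step)
    v_hat = v / (1 - beta2 ** step)
    # Dynamic lower/upper bounds that converge toward final_lr as step
    # grows, preventing unstable or extreme effective learning rates.
    lower = final_lr * (1 - 1 / (gamma * step + 1))
    upper = final_lr * (1 + 1 / (gamma * step))
    lr = np.clip(alpha / (np.sqrt(v_hat) + eps), lower, upper)
    param = param - lr * m_hat
    return param, m, v

In a decentralized setting, each node would perform such a bounded local update and then mix its parameters with its neighbors, avoiding the communication bottleneck of a single central node.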
