Abstract

Adaptive online optimization algorithms such as Adam, RMSprop, and AdaBound have recently become tremendously popular and have been widely applied to problems in deep learning. Despite their prevalence, however, distributed versions of these adaptive online algorithms have rarely been investigated. To fill this gap, a distributed online adaptive subgradient learning algorithm over time-varying networks, called DAdaxBound, is developed; it exponentially accumulates long-term past gradient information and maintains dynamic bounds on the learning rates through learning rate clipping. The dynamic regret bound of DAdaxBound for convex and potentially nonsmooth objective functions is then theoretically analysed. Finally, numerical experiments are carried out to assess the effectiveness of DAdaxBound on different datasets. The experimental results demonstrate that DAdaxBound compares favourably with other competing distributed online optimization algorithms.

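To make the two mechanisms named in the abstract concrete, the following is a minimal, hypothetical sketch of one local update step. It assumes an Adamax-style exponentially weighted accumulator for past gradient magnitudes, AdaBound-style clipping of the per-coordinate learning rate into dynamic bounds, and a consensus (weighted averaging) step over neighbours on the time-varying network. All names, default values, and the precise update form are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def dadaxbound_step(x_i, m_i, u_i, grad_i, neighbor_xs, weights, t,
                    alpha=0.001, beta1=0.9,
                    lower=lambda t: 0.1 - 0.1 / (0.001 * t + 1),
                    upper=lambda t: 0.1 + 0.1 / (0.001 * t)):
    """One illustrative local update for node i (hypothetical form).

    x_i: local parameter vector of node i
    m_i: exponential moving average of local (sub)gradients
    u_i: exponentially accumulated magnitude of past gradients
    grad_i: current local (sub)gradient
    neighbor_xs, weights: neighbours' parameters (including node i itself)
        and the corresponding mixing weights on the time-varying network
    t: iteration counter (t >= 1)
    """
    # Consensus step: mix local parameters with neighbours' parameters.
    x_mix = sum(w * x_j for w, x_j in zip(weights, neighbor_xs))

    # Exponential moving average of (sub)gradients.
    m_i = beta1 * m_i + (1 - beta1) * grad_i

    # Exponentially accumulated long-term gradient magnitude
    # (Adamax-style weighted infinity norm, an assumed form).
    u_i = np.maximum(0.999 * u_i, np.abs(grad_i))

    # Dynamic bounds: clip the per-coordinate learning rate into
    # [lower(t), upper(t)], which tighten as t grows.
    step = np.clip(alpha / (u_i + 1e-8), lower(t), upper(t))

    # Local descent step taken from the mixed (consensus) point.
    x_i = x_mix - step * m_i
    return x_i, m_i, u_i
```

In this reading, the max-based accumulator is one plausible interpretation of "exponentially accumulates long-term past gradient information", while the clipping functions, which start loose and converge towards a constant rate, reflect the AdaBound-style mechanism for dynamically bounding learning rates.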