Abstract

We study the convergence rate of Bregman gradient methods for convex optimization in the space of measures on a d-dimensional manifold. Under basic regularity assumptions, we show that the suboptimality gap at iteration k is in O(log(k) k^{-1}) for multiplicative updates, while it is in O(k^{-q/(d+q)}) for additive updates, for some q ∈ {1, 2, 4} determined by the structure of the objective function. Our flexible proof strategy, based on approximation arguments, allows us to painlessly cover all Bregman Proximal Gradient Methods (PGM) and their acceleration (APGM) under various geometries such as the hyperbolic entropy and L^p divergences. We also prove the tightness of our analysis with matching lower bounds and confirm the theoretical results with numerical experiments on low-dimensional problems. Note that all these optimization methods must additionally pay the computational cost of discretization, which can be exponential in d.

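To make the multiplicative-update case concrete, the following is a minimal sketch of an entropic mirror descent (Bregman gradient) step applied to a measure discretized as nonnegative weights on a fixed grid. The quadratic objective, grid size, and step size are illustrative assumptions and not the paper's specific setup.

```python
import numpy as np

# Illustrative sketch: multiplicative (entropic mirror descent) updates on a
# measure discretized as probability weights over a fixed grid of n points.
# The objective F(w) = 0.5 * ||A w - b||^2 is an assumed convex example,
# not the functional studied in the paper.

rng = np.random.default_rng(0)

n = 200                                  # number of grid points
A = rng.standard_normal((50, n))
b = rng.standard_normal(50)

def grad(w):
    # Gradient of the smooth convex objective F(w) = 0.5 * ||A w - b||^2.
    return A.T @ (A @ w - b)

w = np.full(n, 1.0 / n)                  # start from the uniform measure
eta = 1.0 / np.linalg.norm(A, 2) ** 2    # step size ~ 1/L (L = Lipschitz constant of grad F)

for k in range(1000):
    # Multiplicative update under the entropy geometry:
    # w <- w * exp(-eta * grad F(w)), then renormalize onto the simplex.
    w = w * np.exp(-eta * grad(w))
    w /= w.sum()

print("final objective:", 0.5 * np.linalg.norm(A @ w - b) ** 2)
```

Here the additive-update counterpart would instead take a Euclidean (or other Bregman) projected gradient step on the weights; the abstract's rates contrast the O(log(k) k^{-1}) behavior of the multiplicative scheme with the dimension-dependent O(k^{-q/(d+q)}) rate of additive schemes.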