Abstract

We analyze the convergence rate of the multiplicative gradient (MG) method for PET-type problems with m component functions and an n-dimensional optimization variable. We show that the MG method has an O(ln(n)/t) convergence rate, in both the ergodic and the non-ergodic senses. Furthermore, we show that the distances from the iterates to the set of optimal solutions converge to zero at rate O(1/t). Our results show that, in the regime n = O(exp(m)), to find an ε-optimal solution of the PET-type problems, the MG method has lower computational complexity than the relatively-smooth gradient method and the Frank-Wolfe method for convex composite optimization involving a logarithmically-homogeneous barrier.
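To make the setting concrete, below is a minimal sketch of the classical multiplicative gradient update for a PET-type maximum-likelihood problem. It assumes the standard formulation minimize f(x) = -(1/m) Σᵢ ln(aᵢᵀx) over the probability simplex (the abstract does not spell out the formulation, so this problem instance, the matrix `A`, and the iteration count are illustrative assumptions, not the paper's exact setup):

```python
import numpy as np

def f(A, x):
    """Objective f(x) = -(1/m) * sum_i ln(a_i^T x)."""
    return -np.mean(np.log(A @ x))

def mg_method(A, x0, iters=200):
    """Multiplicative gradient iterations for minimizing f over the simplex.

    Each step multiplies x entrywise by the (negative) gradient factor
    (1/m) * A^T (1 / (A x)); this preserves nonnegativity and, since
    sum_j x_j * (1/m) sum_i A[i,j] / (Ax)[i] = 1, stays on the simplex.
    """
    m, _ = A.shape
    x = x0.copy()
    for _ in range(iters):
        r = A @ x                    # residuals a_i^T x, shape (m,)
        x = x * (A.T @ (1.0 / r)) / m
    return x

# Illustrative random instance (hypothetical data, strictly positive A)
rng = np.random.default_rng(0)
m, n = 50, 10
A = rng.uniform(0.1, 1.0, size=(m, n))
x0 = np.full(n, 1.0 / n)             # start at the simplex center
x = mg_method(A, x0)
```

The update is multiplicative rather than additive, which is why no projection step is needed: iterates remain feasible automatically, and the objective is non-increasing along the trajectory.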
