Abstract
We consider randomized block coordinate stochastic mirror descent (RBSMD) methods for solving high-dimensional stochastic optimization problems with strongly convex objective functions. Our goal is to develop RBSMD schemes that achieve a rate of convergence with a minimum constant factor with respect to the choice of the stepsize sequence. To this end, we consider both subgradient and gradient RBSMD methods, addressing nonsmooth and smooth problems, respectively. For each scheme, (i) we develop self-tuned stepsize rules characterized in terms of problem parameters and algorithm settings; (ii) we show that the non-averaged iterate generated by the underlying RBSMD method converges to the optimal solution both almost surely and in a mean sense; and (iii) we show that the mean squared error is minimized. When problem parameters are unknown, we develop a unifying self-tuned update rule that can be applied in both subgradient and gradient SMD methods, and show that for any arbitrary, sufficiently small initial stepsize, a suitably defined error bound is minimized. We provide constant factor comparisons with standard SMD and RBSMD methods. Numerical experiments on an SVM model show that the self-tuned schemes are robust with respect to the choice of problem parameters and the initial stepsize.
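To make the setting concrete, the following is a minimal Python sketch of one possible RBSMD iteration under a Euclidean distance-generating function: at each step a block of coordinates is sampled uniformly at random and updated with a stochastic gradient step. The quadratic test objective, the noise model, and the diminishing stepsize rule gamma_k = theta / (mu * (k + 1)) are illustrative assumptions only; they are not the self-tuned rules developed in the paper.

```python
import numpy as np

# Hypothetical illustration of randomized block coordinate stochastic
# mirror descent (RBSMD) with a Euclidean distance-generating function,
# so each prox step reduces to a block-wise stochastic gradient update.
# The objective, block partition, and stepsize rule are placeholders.

rng = np.random.default_rng(0)

n, n_blocks = 12, 4                      # dimension and number of blocks
blocks = np.array_split(np.arange(n), n_blocks)
mu = 1.0                                 # strong convexity parameter (assumed known here)
theta = 1.0                              # tunable constant in the illustrative stepsize rule

A = rng.standard_normal((n, n))
Q = A @ A.T + mu * np.eye(n)             # strongly convex quadratic f(x) = 0.5 * x'Qx
x = rng.standard_normal(n)

for k in range(5000):
    b = blocks[rng.integers(n_blocks)]   # sample a block uniformly at random
    # stochastic gradient of the sampled block: exact block gradient plus noise
    g_b = Q[b] @ x + 0.1 * rng.standard_normal(len(b))
    gamma_k = theta / (mu * (k + 1))     # illustrative diminishing stepsize, not the paper's rule
    x[b] -= gamma_k * g_b                # Euclidean mirror (prox) step on block b only

print("final objective:", 0.5 * x @ Q @ x)   # should approach the optimal value 0
```

The self-tuned rules studied in the paper replace the fixed constant theta with quantities updated from problem parameters and algorithm settings so that the resulting error bound is minimized; the sketch above only conveys the block-sampling structure of the iteration.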