Abstract
We consider large-scale convex programs with functional constraints, where interior point methods are intractable due to the problem size. Effective solution techniques for such problems must rely on simple operations at each iteration and are therefore based on primal–dual first-order methods, such as the Arrow–Hurwicz–Uzawa subgradient method, which use only gradient computations and projections at each iteration. Such primal–dual algorithms can be interpreted as solving the saddle point problem associated with the Lagrange dual. We revisit these methods through the lens of regret minimization from online learning and present a flexible framework. While it is well known that two regret-minimizing algorithms can be used to solve a convex–concave saddle point problem at the standard rate of $O(1/\sqrt{T})$, our framework for primal–dual algorithms exploits structural properties such as smoothness and/or strong convexity to achieve better convergence rates in favorable cases. In particular, for non-smooth problems with strongly convex objectives, our primal–dual framework equipped with an appropriate modification of Nesterov's dual averaging algorithm achieves an $O(1/T)$ convergence rate.
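To make the setting concrete, the following is a minimal, illustrative sketch (not the paper's framework) of a basic Arrow–Hurwicz–Uzawa-style primal–dual subgradient method applied to a toy constrained convex problem; the problem data, step size, and function names are assumptions introduced here for illustration only.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): a basic primal-dual
# subgradient method for  min f(x)  s.t.  g(x) <= 0,  on a toy problem:
#   f(x) = ||x||^2,   g(x) = 1 - x_1 - x_2   (i.e. x_1 + x_2 >= 1).
# The optimum is x* = (0.5, 0.5) with f(x*) = 0.5 and multiplier lambda* = 1.

def f_grad(x):
    return 2.0 * x                      # gradient of f(x) = ||x||^2

def g(x):
    return 1.0 - x.sum()                # single affine constraint, g(x) <= 0

def g_grad(x):
    return -np.ones_like(x)             # gradient of g

def primal_dual(T=5000, step=0.01):
    x = np.zeros(2)                      # primal iterate
    lam = 0.0                            # dual iterate (Lagrange multiplier)
    x_avg = np.zeros(2)                  # ergodic (running) primal average
    for t in range(1, T + 1):
        # descent step on the Lagrangian L(x, lam) = f(x) + lam * g(x) in x
        x = x - step * (f_grad(x) + lam * g_grad(x))
        # ascent step in the dual, projected onto lam >= 0
        lam = max(0.0, lam + step * g(x))
        x_avg += (x - x_avg) / t
    return x_avg, lam

if __name__ == "__main__":
    x_bar, lam = primal_dual()
    print("x_bar =", x_bar, "f(x_bar) =", float(x_bar @ x_bar), "lambda =", lam)
```

With the constant step size used above, the averaged primal iterate converges toward $x^* = (0.5, 0.5)$; the paper's contribution concerns how regret-minimizing updates and structural assumptions (smoothness, strong convexity) improve the rate of such schemes.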