Abstract

Although quasi-Newton methods have been extensively studied in the literature, they either guarantee only local convergence or rely on a series of line searches for global convergence, which is difficult to implement in the distributed setting. In this work, we first propose a line-search-free greedy quasi-Newton (GQN) method with adaptive steps and establish explicit non-asymptotic bounds for both the global convergence rate and the local superlinear rate. Our novel idea lies in the design of multiple GQN updates, which involve only computing Hessian-vector products, to control the Hessian approximation error, together with a simple mechanism that adjusts stepsizes to ensure improvement of the objective function at every iteration. We then extend the method to the master–worker framework and propose a distributed adaptive GQN method whose communication cost is comparable to that of first-order methods while retaining the superior convergence properties of its centralized counterpart. Finally, we demonstrate the advantages of our methods via numerical experiments.
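To make the kind of update the abstract refers to concrete, below is a minimal Python sketch of one greedy BFGS-type Hessian-approximation update driven only by Hessian-vector products, in the spirit of the greedy quasi-Newton framework (Rodomanov & Nesterov, 2021) that GQN builds on. The coordinate-based greedy selection rule and all names here are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def greedy_bfgs_update(G, hvp, n):
    """One greedy BFGS-type update of the Hessian approximation G.

    G   : current (n x n) Hessian approximation (assumed to dominate
          the true Hessian H, which is assumed positive definite).
    hvp : callable v -> H @ v, a Hessian-vector product oracle.
    n   : problem dimension.

    Hypothetical sketch: the greedy direction is chosen among the
    coordinate basis vectors to maximize u^T G u / u^T H u, as in the
    classical greedy quasi-Newton scheme; the paper's selection rule
    may differ.
    """
    # Diagonal of the true Hessian via n Hessian-vector products.
    diag_H = np.array([hvp(e)[i] for i, e in enumerate(np.eye(n))])
    # Greedy coordinate: largest ratio of approximate to true curvature.
    i = int(np.argmax(np.diag(G) / diag_H))
    u = np.zeros(n)
    u[i] = 1.0
    Hu = hvp(u)          # one more Hessian-vector product
    Gu = G @ u
    # Standard BFGS update of G along the greedily chosen direction u:
    # G <- G - (G u u^T G)/(u^T G u) + (H u u^T H)/(u^T H u).
    G = G - np.outer(Gu, Gu) / (u @ Gu) + np.outer(Hu, Hu) / (u @ Hu)
    return G
```

Applying several such updates per iteration, as the abstract describes, shrinks the Hessian approximation error at the cost of a few extra Hessian-vector products, which is what removes the need for line searches.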
