Abstract
The principle of majorization-minimization (MM) provides a general framework for deriving effective algorithms to solve optimization problems. However, the resulting methods often suffer from slow convergence, especially in large-scale and high-dimensional data settings. This has motivated several acceleration schemes tailored for MM algorithms, but many existing approaches are either problem-specific or rely on approximations and heuristics loosely inspired by the optimization literature. We propose a novel quasi-Newton method for accelerating any valid MM algorithm, cast as seeking a fixed point of the MM algorithm map. The method does not require specific information about, or computation of, the objective function or its gradient, and admits a limited-memory variant amenable to efficient computation in high-dimensional settings. By rigorously connecting our approach to Broyden’s classical root-finding methods, we establish convergence guarantees and identify conditions for linear and super-linear convergence. These results are validated numerically and compared with peer methods in a thorough empirical study, showing that our method achieves state-of-the-art performance across a diverse range of problems. Supplementary materials for this article are available online.
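To make the idea concrete, the following is a minimal sketch, not the authors' implementation, of quasi-Newton acceleration of a generic fixed-point map: the MM update x ↦ F(x) is recast as the root-finding problem g(x) = F(x) − x = 0 and attacked with a limited-memory "bad Broyden" secant update to an inverse-Jacobian approximation. The Weiszfeld (geometric-median) iteration is used only as a convenient, easily verified MM map; all function names and parameters here are illustrative assumptions, not part of the paper.

```python
import numpy as np

def weiszfeld_map(x, pts, eps=1e-12):
    """One MM (Weiszfeld) update for the geometric median of the rows of `pts`."""
    w = 1.0 / np.maximum(np.linalg.norm(pts - x, axis=1), eps)
    return (w[:, None] * pts).sum(axis=0) / w.sum()

def qn_accelerated_fixed_point(F, x0, m=5, tol=1e-10, max_iter=200):
    """Limited-memory Broyden-type acceleration of the iteration x_{k+1} = F(x_k).

    Seeks a root of g(x) = F(x) - x. Keeps at most `m` secant pairs and applies
    the 'bad Broyden' inverse update H <- H + (s - H y) y^T / (y^T y), where
    s = x_{k+1} - x_k and y = g(x_{k+1}) - g(x_k). Dropping the oldest pair is a
    pragmatic limited-memory truncation, in the spirit of the abstract.
    """
    x = np.asarray(x0, dtype=float)
    g = F(x) - x
    Ys, Us = [], []  # stored secant data defining the inverse-Jacobian approximation

    def apply_H(v):
        # With no stored pairs, H_0 = -I, so x - H g = x + g = F(x): the plain MM step.
        out = -v.astype(float)
        for y, u in zip(Ys, Us):
            out += u * (y @ v)
        return out

    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            return x, k
        x_new = x - apply_H(g)                    # quasi-Newton step on g(x) = F(x) - x
        g_new = F(x_new) - x_new
        if (not np.all(np.isfinite(g_new))) or np.linalg.norm(g_new) > np.linalg.norm(g):
            x_new = x + g                         # safeguard: fall back to the MM step
            g_new = F(x_new) - x_new
        s, y = x_new - x, g_new - g
        yy = y @ y
        if yy > 1e-30:
            Us.append((s - apply_H(y)) / yy)      # 'bad Broyden' rank-one correction
            Ys.append(y)
            if len(Ys) > m:                       # limited memory: discard oldest pair
                Ys.pop(0); Us.pop(0)
        x, g = x_new, g_new
    return x, max_iter

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(200, 5))
    x_star, iters = qn_accelerated_fixed_point(lambda x: weiszfeld_map(x, pts),
                                               x0=pts.mean(axis=0))
    print(f"converged in {iters} quasi-Newton iterations")
```

Note the design choice that echoes the abstract: because the initial inverse-Jacobian guess makes the first step coincide with the plain MM update, the accelerator needs only evaluations of the MM map itself, never the objective or its gradient, and the memory parameter m trades storage for a better secant approximation.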