The principle of majorization-minimization (MM) provides a general framework for deriving effective algorithms to solve optimization problems. However, the resulting methods often suffer from slow convergence, especially in large-scale and high-dimensional data settings. This has motivated several acceleration schemes tailored for MM algorithms, but many existing approaches are either problem-specific or rely on approximations and heuristics loosely inspired by the optimization literature. We propose a novel quasi-Newton method for accelerating any valid MM algorithm, cast as seeking a fixed point of the MM algorithm map. The method requires no problem-specific information or computations involving the objective function or its gradient, and enjoys a limited-memory variant amenable to efficient computation in high-dimensional settings. By rigorously connecting our approach to Broyden's classical root-finding methods, we establish convergence guarantees and identify conditions for linear and super-linear convergence. These results are validated numerically and compared to peer methods in a thorough empirical study, showing that our method achieves state-of-the-art performance across a diverse range of problems. Supplementary materials for this article are available online.
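To make the fixed-point view concrete, the following is a minimal, illustrative Python sketch of Broyden-type quasi-Newton acceleration applied to a generic algorithm map F (one MM update), treating the residual g(x) = F(x) - x as a root-finding problem. The function name broyden_accelerate, the full-matrix inverse-Jacobian update, and the toy linear contraction in the demo are assumptions made for illustration only; they are not the paper's implementation, which in particular uses a limited-memory variant to avoid storing a dense matrix.

```python
import numpy as np

def broyden_accelerate(F, x0, tol=1e-10, max_iter=200):
    """Accelerate the fixed-point iteration x <- F(x) by applying
    Broyden's (good) method to the residual g(x) = F(x) - x.
    Illustrative sketch only: uses a dense inverse-Jacobian estimate,
    whereas a limited-memory scheme would store only recent update pairs."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = -np.eye(n)            # inverse-Jacobian estimate; -I makes step 0 a plain MM update
    g = F(x) - x              # fixed-point residual
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        s = -H @ g            # quasi-Newton step on the residual
        x_new = x + s
        g_new = F(x_new) - x_new
        y = g_new - g
        Hy = H @ y
        denom = s @ Hy
        if abs(denom) > 1e-12:  # guard against breakdown of the rank-one update
            # Broyden's good update: enforces the secant condition H_new @ y = s
            H += np.outer(s - Hy, s @ H) / denom
        x, g = x_new, g_new
    return x

if __name__ == "__main__":
    # Toy example: the map F(x) = A x + b is a contraction whose fixed
    # point solves (I - A) x = b, so the two printed vectors should agree.
    A = np.diag([0.9, 0.5, 0.2])
    b = np.ones(3)
    x = broyden_accelerate(lambda v: A @ v + b, np.zeros(3))
    print(x, np.linalg.solve(np.eye(3) - A, b))
```

Because the scheme touches the problem only through evaluations of F, it reflects the abstract's claim that no objective or gradient information is needed beyond the MM update itself.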