When and where was the study conducted:

This work was done in 2018, 2019, and 2020, when Palma London was a PhD student at Caltech and Shai Vardi was a postdoc at Caltech. This work was also done in part while Palma London was visiting Purdue University, and while Reza Eghbali was a postdoctoral fellow at the Simons Institute for the Theory of Computing. Adam Wierman is a professor at Caltech.

Article Summary and Talking Points

Please describe the primary purpose/findings of your article in 3 sentences or less.

This paper presents a framework for accelerating (speeding up) existing convex program solvers. Across engineering disciplines, a fundamental bottleneck is the availability of fast, efficient, accurate solvers. We present an acceleration method that speeds up linear programming solvers such as Gurobi and convex program solvers such as the Splitting Conic Solver by two orders of magnitude.

Please include 3-5 short bullet points of “Need to Know” items regarding this research and your findings.

- Optimization problems arise in many engineering and science disciplines, and developing efficient optimization solvers is key to future innovation.
- We speed up the linear programming solver Gurobi by two orders of magnitude.
- This work applies to optimization problems with monotone objective functions and packing constraints, a common problem formulation across many disciplines (an illustrative sketch of this problem class appears at the end of this document).

Please identify 2 pull quotes from your article that best capture the novelty and impact of your research.

“We propose a framework for accelerating exact and approximate convex programming solvers for packing linear programming problems and a family of convex programming problems with linear constraints. Analytically, we provide worst-case guarantees on the run time and the quality of the solution produced. Numerically, we demonstrate that our framework speeds up Gurobi and the Splitting Conic Solver by two orders of magnitude, while maintaining a near-optimal solution.”

“Our focus in this paper is on a class of packing problems for which data is either very costly or hard to obtain. In these situations, the number of data points available is much smaller than the number of variables. In a machine-learning setting, this regime is increasingly prevalent because it is often advantageous to consider larger and larger feature spaces, while not necessarily obtaining proportionally more data.”

Article Implications

Please describe in 5 sentences or less the innovative takeaway(s) of your research.

This framework applies to optimization problems with monotone objective functions and packing constraints, which is a common problem formulation across many disciplines, including machine learning, inference, and resource allocation. Providing fast solvers for these problems is crucial. We exploit characteristics of the problem structure and leverage statistical properties of the problem constraints to speed up optimization solvers. We present worst-case guarantees on run time, and empirically demonstrate speedups of two orders of magnitude.

Please describe in 5 sentences or less why your findings would be of interest to the general public.

Many problems in engineering, science, math, and machine learning involve solving an optimization problem. Fast, efficient optimization solvers are key to future innovation in science and engineering. This work presents a tool to accelerate existing convex solvers, and thus can also be applied to future solvers. As datasets grow in size, fast solvers become ever more crucial.

Who would be the most impacted by your research (i.e., by industry, job title, consumer category)?

Our work impacts machine-learning researchers and optimization researchers, in industry or academia.
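For readers unfamiliar with the problem class mentioned above, the sketch below sets up a small packing linear program in the regime the pull quote describes (far fewer constraints than variables) and hands it to the Splitting Conic Solver through the CVXPY modeling layer. This is purely illustrative and uses randomly generated data; it shows the baseline problem being solved, not the paper's acceleration framework.

```python
# Illustrative only: a packing LP of the kind the framework targets.
# This is NOT the paper's acceleration method; it simply solves the
# baseline problem with the Splitting Conic Solver (SCS) via CVXPY.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n = 10, 1000                          # few data points, many variables
A = rng.uniform(0.0, 1.0, size=(m, n))   # nonnegative constraint matrix
b = rng.uniform(n / 4, n / 2, size=m)    # nonnegative budgets
c = rng.uniform(0.0, 1.0, size=n)        # nonnegative objective weights

x = cp.Variable(n, nonneg=True)          # packing LPs require x >= 0
problem = cp.Problem(cp.Maximize(c @ x), [A @ x <= b])
problem.solve(solver=cp.SCS)             # one of the solvers the paper accelerates

print(f"status: {problem.status}, optimal value: {problem.value:.2f}")
```

The same problem can be passed to Gurobi instead of SCS by changing the solver argument; the framework described in the article accelerates such solvers without replacing them.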