Abstract

Sparse approximation addresses the problem of approximately fitting a linear model with a solution having as few non-zero components as possible. While most sparse estimation algorithms rely on suboptimal formulations, this work studies the performance of exact optimization of $\ell_0$-norm-based problems through Mixed-Integer Programs (MIPs). Nine different sparse optimization problems are formulated, based on the $\ell_1$, $\ell_2$ or $\ell_\infty$ data misfit measures and on either constrained or penalized formulations. For each problem, a MIP reformulation allows exact optimization, with an optimality proof, for moderate-size yet difficult sparse estimation problems. The algorithmic efficiency of all formulations is evaluated on sparse deconvolution problems. This study promotes error-constrained minimization of the $\ell_0$ norm as the most efficient choice when associated with the $\ell_1$ and $\ell_\infty$ misfits, whereas the $\ell_2$ misfit is more efficiently optimized with sparsity-constrained and sparsity-penalized problems. Exact $\ell_0$-norm optimization is shown to outperform classical methods in terms of solution quality, both for over- and underdetermined problems. Numerical simulations emphasize the relevance of the different $\ell_p$ fitting possibilities as a function of the statistical distribution of the noise. Such exact approaches are shown to be an efficient alternative, in moderate dimension, to classical (suboptimal) sparse approximation algorithms with the $\ell_2$ data misfit. They also provide an algorithmic solution to less common sparse optimization problems based on $\ell_1$ and $\ell_\infty$ misfits. For each formulation, simulated test problems are proposed for which optima have been successfully computed. Data and optimal solutions are made available as potential benchmarks for evaluating other sparse approximation methods.
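To make the abstract's central idea concrete, the following is a minimal sketch (not the authors' code) of one of the nine problems: error-constrained $\ell_0$ minimization with an $\ell_\infty$ misfit, $\min \|x\|_0$ s.t. $\|y - Ax\|_\infty \le \epsilon$. Because the $\ell_\infty$ misfit constraint is linear, the problem becomes a MILP after a standard big-M linearization of the $\ell_0$ norm with binary support indicators; here it is solved with SciPy's HiGHS-backed `milp` solver. The instance ($A$, $y$, $\epsilon$, $M$) is an illustrative synthetic example, not data from the paper.

```python
# Sketch of error-constrained l0 minimization with l_inf misfit:
#   min sum(b)  s.t.  ||y - A x||_inf <= eps,  -M*b_i <= x_i <= M*b_i,
#   b_i in {0,1}  (b_i = 0 forces x_i = 0; M bounds |x_i| a priori).
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(0)
m, n, M, eps = 8, 6, 10.0, 1e-3

# Synthetic noiseless problem: y = A x_true with a 2-sparse x_true.
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[1, 4]] = [1.5, -2.0]
y = A @ x_true

# Decision variables z = [x (continuous); b (binary)]; objective = sum(b).
c = np.concatenate([np.zeros(n), np.ones(n)])
integrality = np.concatenate([np.zeros(n), np.ones(n)])
bounds = Bounds(np.concatenate([-M * np.ones(n), np.zeros(n)]),
                np.concatenate([M * np.ones(n), np.ones(n)]))

I = np.eye(n)
Z = np.zeros((m, n))
constraints = [
    # l_inf misfit as two-sided linear bounds: y - eps <= A x <= y + eps
    LinearConstraint(np.hstack([A, Z]), y - eps, y + eps),
    # big-M coupling: x_i - M b_i <= 0 and -x_i - M b_i <= 0
    LinearConstraint(np.hstack([I, -M * I]), -np.inf, 0.0),
    LinearConstraint(np.hstack([-I, -M * I]), -np.inf, 0.0),
]

res = milp(c, constraints=constraints, integrality=integrality, bounds=bounds)
x_opt = res.x[:n]
print(int(round(res.fun)))  # optimal cardinality: 2 for this planted instance
```

The branch-and-bound search behind the solver is what provides the optimality proof mentioned in the abstract; the sparsity-constrained and sparsity-penalized variants differ only in where the cardinality term $\sum_i b_i$ appears (as a constraint or in the objective), and the $\ell_1$ misfit is handled similarly with auxiliary slack variables, while the $\ell_2$ misfit leads to a mixed-integer quadratic program instead.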
