Abstract

This paper investigates the theoretical guarantees of $\ell^1$-analysis regularization when solving linear inverse problems. Most previous works in the literature have focused on the sparse synthesis prior, where sparsity is measured as the $\ell^1$ norm of the coefficients that synthesize the signal from a given dictionary. In contrast, the more general analysis regularization minimizes the $\ell^1$ norm of the correlations between the signal and the atoms in the dictionary, where these correlations define the analysis support. The corresponding variational problem encompasses several well-known regularizations such as the discrete total variation and the fused Lasso. Our main contributions consist in deriving sufficient conditions that guarantee exact or partial analysis support recovery of the true signal in the presence of noise. More precisely, we give a sufficient condition ensuring that a signal is the unique solution of the $\ell^1$-analysis regularization in the noiseless case. The same condition also guarantees exact analysis support recovery and $\ell^2$-robustness of the $\ell^1$-analysis minimizer with respect to a small enough noise in the measurements. This condition turns out to be sharp for the robustness of the sign pattern. To show partial support recovery and $\ell^2$-robustness to an arbitrary bounded noise, we introduce a stronger sufficient condition. When specialized to $\ell^1$-synthesis regularization, our results recover some corresponding recovery and robustness guarantees previously known in the literature; from this perspective, our work is a generalization of these results. We finally illustrate these theoretical findings on several examples to study the robustness of the 1-D total variation, shift-invariant Haar dictionary, and fused Lasso regularizations.
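For concreteness, a minimal formulation consistent with the description above (the symbols $\Phi$, $D$, $\lambda$, and $w$ are our notation for illustration, not fixed by this abstract) reads
$$x^\star \in \operatorname*{argmin}_{x} \; \tfrac{1}{2}\|y - \Phi x\|_2^2 + \lambda \|D^* x\|_1,$$
where $y = \Phi x_0 + w$ are the noisy measurements of the true signal $x_0$, $\Phi$ is the linear measurement operator, $D$ is the analysis dictionary, and the correlations $D^* x$ are the quantities whose sign and support define the analysis support. In the noiseless setting ($w = 0$), the corresponding constrained problem is $\min_x \|D^* x\|_1$ subject to $\Phi x = y$; taking $D$ as a finite-difference operator recovers the discrete total variation, and suitable choices of $D$ recover the fused Lasso, while $D = \mathrm{Id}$ specializes to $\ell^1$-synthesis (Lasso) regularization.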
