Abstract

Measuring the error by an $\ell^1$-norm, we analyze under sparsity assumptions an $\ell^0$-regularization approach, where the penalty in the Tikhonov functional is complemented by a general stabilizing convex functional. In this context, ill-posed operator equations $Ax = y$ with an injective and bounded linear operator $A$ mapping between $\ell^2$ and a Banach space $Y$ are regularized. For sparse solutions, error estimates as well as linear and sublinear convergence rates are derived based on a variational inequality approach, where the regularization parameter can be chosen either a priori in an appropriate way or a posteriori by the sequential discrepancy principle. To further illustrate the balance between the $\ell^0$-term and the complementing convex penalty, the important special case of the squared $\ell^2$-norm penalty is investigated, showing the explicit dependence between both terms. Finally, numerical experiments verify and illustrate the sparsity-promoting properties of the corresponding regularized solutions.
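For orientation, the following display sketches one plausible form of the Tikhonov functional described above; the symbols $T_\alpha$, $\Phi$, $p$, $y^\delta$, and $\delta$, as well as the exact coupling of the two penalty terms, are illustrative assumptions not specified in the abstract:
\[
  T_\alpha(x) \;=\; \|Ax - y^\delta\|_Y^{\,p}
  \;+\; \alpha \bigl( \|x\|_{\ell^0} + \Phi(x) \bigr),
  \qquad
  \|x\|_{\ell^0} := \#\{\, k \in \mathbb{N} : x_k \neq 0 \,\},
\]
where $y^\delta$ denotes noisy data with $\|y - y^\delta\|_Y \le \delta$, $\Phi$ is the general stabilizing convex functional, and the special case $\Phi(x) = \|x\|_{\ell^2}^2$ is the one investigated explicitly.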
