Regularization is a common tool in variational inverse problems for imposing assumptions on the unknown parameters. One such assumption is sparsity, which is commonly promoted using lasso and total variation-like regularization. Although the solutions to many such regularized inverse problems can be interpreted as maximum a posteriori (MAP) estimates under well-chosen posterior distributions, samples from these distributions are generally not sparse. In this paper, we present a framework for implicitly defining a probability distribution that combines the effects of sparsity-imposing regularization with Gaussian distributions. Unlike continuous distributions, these implicit distributions can assign positive probability to sparse vectors. We study these regularized distributions for various regularization functions, including total variation regularization and piecewise linear convex functions. We apply the developed theory to uncertainty quantification for Bayesian linear inverse problems and derive a Gibbs sampler for a Bayesian hierarchical model. To illustrate the difference between our sparsity-inducing framework and continuous distributions, we apply the framework to small-scale deblurring and computed tomography examples.
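To make the core idea concrete, the following is a minimal sketch, assuming (this is our illustrative reading, not the paper's exact construction) that the implicitly defined distribution is realized by pushing Gaussian samples through a sparsity-promoting regularized least-squares map. With an identity forward operator and lasso regularization, that map reduces to coordinatewise soft-thresholding; the names `soft_threshold` and `lam` below are hypothetical and chosen only for illustration.

```python
# Sketch (assumption): pushforward of a Gaussian through a lasso-regularized
# least-squares map, here with an identity forward operator so the map has the
# closed form of soft-thresholding. The resulting samples land exactly on zero
# with positive probability, unlike samples from any continuous distribution.
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(z, lam):
    """Proximal map of lam * ||x||_1, i.e. argmin_x 0.5*||x - z||^2 + lam*||x||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

n, lam = 10, 0.5
y = rng.normal(size=(1000, n))   # samples from the underlying Gaussian N(0, I)
x = soft_threshold(y, lam)       # pushforward: samples from the implicit distribution

# Many coordinates are exactly zero, so sparse vectors carry positive probability.
print("fraction of exact zeros:", np.mean(x == 0.0))
```

The design point this sketch illustrates is that the sparsity comes from the regularized minimization map itself, not from the Gaussian, which is why the resulting distribution has no density with respect to the Lebesgue measure yet remains easy to sample from.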