Abstract

Bayesian hierarchical models can provide efficient algorithms for finding sparse solutions to ill-posed linear inverse problems. The models typically comprise a conditionally Gaussian prior model for the unknown, augmented by a generalized gamma hyper-prior model for the variance hyper-parameters. This investigation generalizes such models and their efficient maximum a posteriori estimation using the iterative alternating sequential (IAS) algorithm in two ways: (1) general sparsifying transforms: diverging from conventional methods, our approach permits the use of sparsifying transformations with nontrivial kernels; (2) unknown noise variances: the noise variance is treated as a random variable to be estimated during the inference procedure. This is important in applications where the noise level cannot be accurately estimated a priori. Remarkably, these augmentations neither significantly increase the computational cost of the algorithm nor compromise its efficacy. We include convexity and convergence analysis and demonstrate our method’s efficacy in several numerical experiments.
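To make the alternating structure concrete, here is a minimal sketch of an IAS-style scheme for the standard setting the abstract builds on: a conditionally Gaussian prior x_j | θ_j ~ N(0, θ_j) with a gamma hyper-prior on θ_j, fixed noise variance, and no sparsifying transform. All problem sizes, hyper-parameter values (`beta`, `vartheta`), and the demo data are illustrative assumptions, not the paper's exact formulation; the paper's generalizations (nontrivial-kernel transforms, unknown noise variance) are not shown.

```python
import numpy as np

# Illustrative sparse recovery problem (all values assumed for the demo).
rng = np.random.default_rng(0)
m, n = 40, 80
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 30, 60]] = [3.0, -2.0, 4.0]
sigma = 0.01                                  # noise std, assumed known here
y = A @ x_true + sigma * rng.standard_normal(m)

# Gamma hyper-prior parameters (assumed values; eta = beta - 3/2).
beta, vartheta = 1.501, 1e-3
eta = beta - 1.5

theta = np.full(n, vartheta)                  # initial variance hyper-parameters
for _ in range(30):
    # x-update: quadratic MAP step, a ridge-type solve weighted by theta
    M = A.T @ A / sigma**2 + np.diag(1.0 / theta)
    x = np.linalg.solve(M, A.T @ y / sigma**2)
    # theta-update: closed-form minimizer under the gamma hyper-prior
    theta = vartheta * (eta / 2 + np.sqrt(eta**2 / 4 + x**2 / (2 * vartheta)))
```

Each sweep alternates a linear solve for x with a componentwise closed-form update for θ, which is what keeps the method cheap; small θ_j drive the corresponding x_j toward zero, producing sparsity.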