We study the minimization problem of a non-convex sparsity-promoting penalty function, the transformed $$l_1$$ (TL1), and its application in compressed sensing (CS). The TL1 penalty interpolates the $$l_0$$ and $$l_1$$ norms through a positive parameter $$a \in (0,+\infty )$$ , similar to $$l_p$$ with $$p \in (0,1]$$ , and is known to satisfy unbiasedness, sparsity and Lipschitz continuity properties. We first consider the constrained minimization problem and discuss the exact recovery of the $$l_0$$ -norm minimal solution based on the null space property (NSP). We then prove the stable recovery of the $$l_0$$ -norm minimal solution if the sensing matrix A satisfies a restricted isometry property (RIP). We formulate a normalized problem to overcome the lack of a scaling property of the TL1 penalty function. For a general sensing matrix A, we show that the support set of a local minimizer corresponds to linearly independent columns of A. Next, we present difference of convex algorithms for TL1 (DCATL1) for computing TL1-regularized constrained and unconstrained problems in CS. The DCATL1 algorithm involves outer and inner loops of iterations, a one-time matrix inversion, repeated shrinkage operations and matrix-vector multiplications. The inner loop solves an $$l_1$$ minimization problem, for which we employ the Alternating Direction Method of Multipliers. For the unconstrained problem, we prove convergence of DCATL1 to a stationary point satisfying the first-order optimality condition. In numerical experiments, we identify the optimal value $$a=1$$ and compare DCATL1 with other CS algorithms on two classes of sensing matrices: Gaussian random matrices and over-sampled discrete cosine transform (DCT) matrices. Among existing algorithms, the iterated reweighted least squares method based on the $$l_{1/2}$$ norm is the best in sparse recovery for Gaussian matrices, and the DCA algorithm based on the $$l_1$$ minus $$l_2$$ penalty is the best for over-sampled DCT matrices. We find that for both classes of sensing matrices, the DCATL1 algorithm (initialized with $$l_1$$ minimization) always ranks near or at the top and is the most robust choice, being insensitive to the conditioning of the sensing matrix A. DCATL1 is also competitive with DCA applied to other non-convex penalty functions commonly used in statistics that involve two hyperparameters.
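For reference, since the abstract itself does not display the formula, the TL1 penalty is commonly written as $$P_a(x) = \sum_{i} \frac{(a+1)|x_i|}{a+|x_i|},$$ which tends to the $$l_1$$ norm $$\|x\|_1$$ as $$a \to +\infty$$ and to the $$l_0$$ norm (the count of nonzero entries) as $$a \to 0^+$$. One natural difference-of-convex splitting, stated here as an illustration consistent with the abstract's description of an $$l_1$$ inner problem, is $$P_a(x) = \frac{a+1}{a}\|x\|_1 - \left(\frac{a+1}{a}\|x\|_1 - P_a(x)\right),$$ where both terms are convex; each DCA outer iteration then linearizes the subtracted term, leaving an $$l_1$$ -regularized subproblem that can be handled by ADMM.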