Abstract

The best sparse rank-1 tensor approximation problem consists in finding a projection of a given data tensor onto the set of sparse rank-1 tensors, which is important in sparse tensor decomposition and related problems. Existing models use the ℓ0 or ℓ1 norm to promote sparsity. In this work, we first construct a truncated-exponential-induced regularizer to encourage sparsity, and prove that this regularizer admits a reweighting property. Lower bounds for the nonzero entries and upper bounds for the number of nonzero entries of stationary points of the associated optimization problem are established. By exploiting the reweighting property of the regularizer, we develop an iteratively reweighted algorithm for solving the problem and establish its convergence to a stationary point without any assumption. In particular, we show that if the parameter of the regularizer is small enough, then the support of the iterates becomes fixed after finitely many iterations. Numerical experiments illustrate the effectiveness of the proposed model and algorithm.
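
The abstract does not give the explicit form of the model, regularizer, or algorithm. The following is a minimal, hypothetical sketch of an iteratively reweighted scheme of the kind described: a concave sparsity penalty is majorized at each iterate by a weighted ℓ1 term (here, assumed weights exp(-|t|/σ)), and the rank-1 fit is updated by an alternating tensor-power step with weighted soft-thresholding. All function names, parameters, and the exact penalty are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of an iteratively reweighted sparse rank-1 tensor fit.
# Assumptions (not stated in the abstract): the regularizer is majorized by a
# weighted l1 term with weights exp(-|t|/sigma), and subproblems are solved by
# an alternating (tensor power) update with soft-thresholding.
import numpy as np

def contract_all_but(T, factors, skip):
    """Contract tensor T with every factor vector except mode `skip`."""
    v = T
    for mode in reversed(range(T.ndim)):   # contract highest modes first so axis indices stay valid
        if mode == skip:
            continue
        v = np.tensordot(v, factors[mode], axes=([mode], [0]))
    return v

def reweighted_sparse_rank1(T, sigma=0.1, mu=0.05, iters=50):
    """Alternating reweighted soft-thresholding for a sparse rank-1 approximation of T."""
    factors = [np.random.randn(n) for n in T.shape]
    factors = [u / np.linalg.norm(u) for u in factors]
    lam = 1.0
    for _ in range(iters):
        for n in range(T.ndim):
            v = contract_all_but(T, factors, skip=n)
            # Reweighting step: weights derived from the assumed exponential-type
            # penalty at the previous iterate; small entries get weights near 1.
            w = np.exp(-np.abs(lam * factors[n]) / sigma)
            # Weighted soft-thresholding of the unconstrained mode-n update.
            x = np.sign(v) * np.maximum(np.abs(v) - mu * w, 0.0)
            lam = np.linalg.norm(x)
            if lam == 0.0:                  # everything was thresholded away
                return 0.0, factors
            factors[n] = x / lam
    return lam, factors

# Usage example: recover a planted sparse rank-1 component from a noisy 3-way tensor.
rng = np.random.default_rng(0)
u = np.zeros(20); u[:3] = [1.0, -0.8, 0.5]; u /= np.linalg.norm(u)
T = 3.0 * np.einsum('i,j,k->ijk', u, u, u) + 0.01 * rng.standard_normal((20, 20, 20))
lam, facs = reweighted_sparse_rank1(T)
print(lam, [np.count_nonzero(f) for f in facs])
```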
