Abstract
Best sparse tensor rank-1 approximation consists of finding a projection of a given data tensor onto the set of sparse rank-1 tensors, which is important in sparse tensor decomposition and related problems. Existing models use the ℓ0 or ℓ1 norm to promote sparsity. In this work, we first construct a truncated-exponential-induced regularizer to encourage sparsity, and prove that this regularizer admits a reweighted property. We derive lower bounds on the magnitudes of the nonzero entries and upper bounds on the number of nonzero entries of stationary points of the associated optimization problem. By using the reweighted property of the regularizer, we develop an iteratively reweighted algorithm for solving the problem, and establish its convergence to a stationary point without any additional assumption. In particular, we show that if the parameter of the regularizer is small enough, then the support of the iterates becomes fixed after finitely many iterations. Numerical experiments illustrate the effectiveness of the proposed model and algorithm.
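To make the setting concrete, the following is a minimal, illustrative sketch of an iteratively reweighted scheme for a sparse rank-1 approximation of a third-order tensor. The exponential weight function, the alternating soft-thresholded updates, and all parameter names (sigma, lam_reg, iters) are assumptions for illustration only; they are not the paper's truncated-exponential-induced regularizer or its exact algorithm.

```python
# Hypothetical sketch: iteratively reweighted alternating updates for a sparse
# rank-1 approximation T ≈ lam * (x ⊗ y ⊗ z) of a third-order tensor.
# The exp(-|v|/sigma) weight is an illustrative stand-in, not the paper's regularizer.
import numpy as np

def weight(v, sigma):
    # Reweighting step: small entries receive large weights, pushing them toward zero.
    return np.exp(-np.abs(v) / sigma)

def sparse_rank1_3d(T, sigma=0.1, lam_reg=0.05, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    x, y, z = (rng.standard_normal(d) for d in T.shape)
    x, y, z = x / np.linalg.norm(x), y / np.linalg.norm(y), z / np.linalg.norm(z)
    for _ in range(iters):
        # Contract T against two factors, then apply a weighted soft-threshold
        # whose weights are recomputed from the current iterate.
        cx = np.einsum('ijk,j,k->i', T, y, z)
        x = np.sign(cx) * np.maximum(np.abs(cx) - lam_reg * weight(x, sigma), 0.0)
        x /= np.linalg.norm(x) + 1e-12
        cy = np.einsum('ijk,i,k->j', T, x, z)
        y = np.sign(cy) * np.maximum(np.abs(cy) - lam_reg * weight(y, sigma), 0.0)
        y /= np.linalg.norm(y) + 1e-12
        cz = np.einsum('ijk,i,j->k', T, x, y)
        z = np.sign(cz) * np.maximum(np.abs(cz) - lam_reg * weight(z, sigma), 0.0)
        z /= np.linalg.norm(z) + 1e-12
    lam = np.einsum('ijk,i,j,k->', T, x, y, z)
    return lam, x, y, z

# Usage: recover a noisy sparse rank-1 tensor and inspect the sparsity of the factors.
x0 = np.zeros(20); x0[:3] = [2.0, -1.5, 1.0]
y0 = np.zeros(15); y0[:2] = [1.0, 0.5]
z0 = np.zeros(10); z0[:2] = [1.5, -1.0]
T = np.einsum('i,j,k->ijk', x0, y0, z0) \
    + 0.01 * np.random.default_rng(1).standard_normal((20, 15, 10))
lam, x, y, z = sparse_rank1_3d(T)
print(lam, np.count_nonzero(x), np.count_nonzero(y), np.count_nonzero(z))
```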