Non-convex relaxation methods have been widely used in tensor recovery problems and, compared with convex relaxation methods, can achieve better recovery results. In this paper, a new non-convex function, the Minimax Logarithmic Concave Penalty (MLCP) function, is proposed, and several of its intrinsic properties are analyzed; notably, the Logarithmic function is shown to be an upper bound of the MLCP function. The proposed function is generalized to the tensor case, yielding the tensor MLCP and the weighted tensor Lγ-norm. Since an explicit solution cannot be obtained when these are applied directly to the tensor recovery problem, the corresponding equivalence theorems are given, namely the tensor equivalent MLCP theorem and the equivalent weighted tensor Lγ-norm theorem. In addition, we propose two EMLCP-based models for classic tensor recovery problems, namely low-rank tensor completion (LRTC) and tensor robust principal component analysis (TRPCA), and design proximal alternating linearized minimization (PALM) algorithms to solve each of them. Furthermore, based on the Kurdyka-Łojasiewicz property, it is proved that the solution sequence of the proposed algorithms has finite length and converges globally to a critical point. Finally, extensive experiments show that the proposed algorithms achieve good results and confirm that the MLCP function is indeed better than the Logarithmic function in the minimization problem, which is consistent with the theoretical analysis.
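For readers unfamiliar with the PALM scheme mentioned above, the sketch below illustrates a generic two-block proximal alternating linearized minimization iteration on a toy sparse rank-one factorization problem. It is not the paper's EMLCP-based solver: the l1 penalties, the coupling term, and all parameter choices here are hypothetical stand-ins used only to show the alternating "linearize the smooth term, then apply a proximal step" structure.

```python
# Minimal, generic PALM sketch (assumed toy problem, not the paper's model):
#   min_{x,y}  lam*||x||_1 + lam*||y||_1 + 0.5*||x y^T - M||_F^2
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def palm_rank_one(M, lam=0.1, gamma=1.1, iters=200, seed=0):
    """Alternate a linearized-gradient + proximal step on each block."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    x, y = rng.standard_normal(m), rng.standard_normal(n)
    for _ in range(iters):
        R = np.outer(x, y) - M                # residual of the smooth coupling term
        c = gamma * max(np.dot(y, y), 1e-8)   # step constant >= Lipschitz const. of grad_x H
        x = soft_threshold(x - (R @ y) / c, lam / c)
        R = np.outer(x, y) - M
        d = gamma * max(np.dot(x, x), 1e-8)   # step constant >= Lipschitz const. of grad_y H
        y = soft_threshold(y - (R.T @ x) / d, lam / d)
    return x, y

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x_true, y_true = rng.standard_normal(30), rng.standard_normal(20)
    M = np.outer(x_true, y_true) + 0.01 * rng.standard_normal((30, 20))
    x, y = palm_rank_one(M)
    print("relative error:", np.linalg.norm(np.outer(x, y) - M) / np.linalg.norm(M))
```

Under the Kurdyka-Łojasiewicz framework cited in the abstract, iterations of this alternating form are the standard setting in which the whole solution sequence can be shown to have finite length and converge to a critical point.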