To efficiently express tensor data in the Tucker format, a critical task is to minimize the multilinear rank so that the model is not over-flexible and prone to overfitting. Due to the lack of rank minimization tools for tensors, existing works connect Tucker multilinear rank minimization to trace norm minimization of matrices unfolded from the tensor data. While these formulations exploit the shared goal of identifying low-dimensional structure in tensors and matrices, this paper reveals that existing trace norm-based formulations in Tucker completion are inefficient at multilinear rank minimization. We further propose a new interpretation of the Tucker format under which trace norm minimization is applied to the factor matrices of the equivalent representation, rather than to matrices unfolded from the tensor data. Based on the newly established problem formulation, a fixed-point iteration algorithm is proposed, and its convergence is proved. Numerical results show that the proposed algorithm exhibits significantly improved performance in multilinear rank learning and, consequently, in tensor signal recovery accuracy, compared to existing trace norm-based Tucker completion methods.
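To make the contrast concrete, consider a minimal sketch. The unfolding-based approach refers to the standard sum-of-nuclear-norms formulation; the factor-matrix variant below is only an illustrative reading of the proposed idea, not the paper's exact objective, and the symbols $\mathcal{T}$ (observed tensor), $\Omega$ (observed index set), $\alpha_n$ (mode weights), $\mathcal{G}$ (core tensor), and $\mathbf{U}_n$ (factor matrices) are generic notation, not taken from the paper. The unfolding-based formulation is
\[
\min_{\mathcal{X}} \; \sum_{n=1}^{N} \alpha_n \,\|\mathbf{X}_{(n)}\|_* \quad \text{s.t.} \quad \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{T}),
\]
where $\mathbf{X}_{(n)}$ is the mode-$n$ unfolding of $\mathcal{X}$, $\|\cdot\|_*$ is the trace (nuclear) norm, and $\mathcal{P}_{\Omega}$ keeps the observed entries. A factor-matrix formulation, in the spirit of the proposed interpretation, would instead take the form
\[
\min_{\mathcal{G},\,\{\mathbf{U}_n\}} \; \sum_{n=1}^{N} \|\mathbf{U}_n\|_* \quad \text{s.t.} \quad \mathcal{P}_{\Omega}\big(\mathcal{G} \times_1 \mathbf{U}_1 \times_2 \cdots \times_N \mathbf{U}_N\big) = \mathcal{P}_{\Omega}(\mathcal{T}),
\]
so that the trace norm penalties act directly on the Tucker factors rather than on unfoldings of the full data tensor.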