Abstract

In tensor completion, the latent nuclear norm is commonly used to induce low-rank structure, but it largely fails to capture global information because it relies on an unbalanced unfolding scheme. To overcome this drawback, we define a new latent nuclear norm equipped with a more balanced unfolding scheme as the low-rank regularizer. Moreover, we combine the new latent nuclear norm with the Frank-Wolfe (FW) algorithm to obtain an efficient completion method that exploits the sparsity structure of the observed tensor. Specifically, both the FW linear subproblem and the line search need to access only the observed entries, so during the iterations we can maintain just the sparse tensors and a set of small basis matrices. Most operations are performed on sparse tensors, and the closed-form solution of the FW linear subproblem is obtained from a rank-one SVD. We theoretically analyze the space and time complexity of the proposed method and show that it is much more efficient than other norm-based completion methods for higher-order tensors. Extensive experiments on visual-data inpainting demonstrate that the proposed method achieves state-of-the-art performance at lower time and space costs, which is especially valuable for memory-limited devices in practical applications.
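To make the mechanism concrete, the sketch below illustrates (for the simpler matrix case, not the paper's tensor algorithm) how a Frank-Wolfe step over a nuclear-norm ball works: the gradient is supported only on observed entries, the linear subproblem is solved in closed form by a rank-one SVD, and exact line search also touches only observed entries. All names and the toy setup here are my own assumptions for illustration.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import svds

def fw_step(X, rows, cols, vals, tau):
    """One Frank-Wolfe step for min 0.5*||P_Omega(X) - vals||^2, ||X||_* <= tau.

    X: current dense estimate (illustrative only; the paper instead maintains
    sparse tensors and small basis matrices). (rows, cols, vals): observed
    entries. tau: nuclear-norm radius.
    """
    # The gradient is a sparse matrix supported only on observed entries.
    resid = X[rows, cols] - vals
    G = coo_matrix((resid, (rows, cols)), shape=X.shape)
    # Rank-one SVD of the sparse gradient gives the closed-form minimizer
    # of the FW linear subproblem over the nuclear-norm ball.
    u, s, vt = svds(G, k=1)
    S = -tau * np.outer(u[:, 0], vt[0])
    # Exact line search along S - X also needs only the observed entries.
    d = S[rows, cols] - X[rows, cols]
    denom = np.dot(d, d)
    gamma = np.clip(-np.dot(resid, d) / denom, 0.0, 1.0) if denom > 0 else 0.0
    return (1 - gamma) * X + gamma * S

# Toy usage: recover a rank-one 20x20 matrix from ~60% of its entries.
rng = np.random.default_rng(0)
M = np.outer(rng.standard_normal(20), rng.standard_normal(20))
mask = rng.random((20, 20)) < 0.6
rows, cols = np.nonzero(mask)
vals = M[rows, cols]
X = np.zeros_like(M)
for _ in range(300):
    X = fw_step(X, rows, cols, vals, tau=np.linalg.norm(M, "nuc"))
err = np.linalg.norm(X[rows, cols] - vals) / np.linalg.norm(vals)
```

Because every iterate is a convex combination of rank-one atoms, one can store only the accumulated factor pairs rather than the dense matrix, which is the source of the memory savings described above.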

Highlights

  • Over the past decades, tensor completion has attracted increasing attention due to its wide applications in a variety of fields, such as computer vision [1]–[11], multi-relational link prediction [12]–[14], and recommendation systems [15]–[18]

  • CANDECOMP/PARAFAC (CP) decomposition [19] and Tucker decomposition [20] are the two most studied and popular models applied in tensor completion

  • Extensive experiments on visual-data inpainting confirm that the proposed method achieves state-of-the-art performance at lower time and space costs, which is especially valuable for memory-limited devices in practical applications


Summary

Introduction

Tensor completion has attracted increasing attention due to its wide applications in a variety of fields, such as computer vision [1]–[11], multi-relational link prediction [12]–[14], and recommendation systems [15]–[18]. The goal of tensor completion is to recover an incomplete tensor from partially observed entries, and most existing methods achieve this via a low-rank structure assumption. These tensor completion methods can mainly be categorized into tensor-decomposition-based methods and rank-minimization-based methods. Tensor-decomposition-based methods aim to decompose the incomplete tensor into a sequence of low-rank factors and predict the missing entries via these latent factors. CANDECOMP/PARAFAC (CP) decomposition [19] and Tucker decomposition [20] are the two most studied and popular models applied in tensor completion.
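To illustrate the decomposition-based idea, the short sketch below (my own toy example, not from the paper) shows how a rank-R CP model represents a 3-way tensor as a sum of R rank-one outer products; once the factor matrices are fit to the observed entries, any missing entry is read off from the same formula.

```python
import numpy as np

def cp_entry(A, B, C, i, j, k):
    # A CP model predicts T[i, j, k] ~= sum_r A[i, r] * B[j, r] * C[k, r],
    # so missing entries are filled in directly from the latent factors.
    return np.sum(A[i] * B[j] * C[k])

rng = np.random.default_rng(1)
R = 2  # CP rank (illustrative choice)
A, B, C = (rng.standard_normal((n, R)) for n in (4, 5, 6))
# Full tensor implied by the factors, via the CP sum of rank-one terms.
T = np.einsum("ir,jr,kr->ijk", A, B, C)
assert np.isclose(cp_entry(A, B, C, 2, 3, 4), T[2, 3, 4])
```

In practice the factors are estimated from the observed entries only (e.g. by alternating least squares), after which `cp_entry` serves as the predictor for unobserved positions.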

