Abstract

Existing tensor completion methods all require hyperparameters, yet these hyperparameters determine each method's performance and are difficult to tune. In this paper, we propose a novel nonparametric tensor completion method, which formulates tensor completion as an unconstrained optimization problem and designs an efficient iterative method to solve it. In each iteration, we not only compute the missing entries with the aid of data correlation but also account for the low-rank structure of the tensor and the convergence speed of the iteration. Our iteration is based on gradient descent and approximates the descent direction with tensor matricization and singular value decomposition. Since every dimension of a tensor plays a symmetric role, the optimal unfolding direction may differ between iterations, so we select it in each iteration by the scaled latent nuclear norm. Moreover, we design a formula for the iteration step size based on a nonconvex penalty. During the iterative process, we store the tensor in a sparse format and adopt the power method to compute the maximum singular value quickly. Experiments on image inpainting and link prediction show that our method is competitive with six state-of-the-art methods.
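As a concrete illustration of the last computational step, here is a minimal sketch of the power method for estimating the maximum singular value of a matricized tensor. This is a sketch under our own assumptions, not the paper's implementation; the function name, iteration count, and tolerance are ours.

```python
import numpy as np

def max_singular_value(A, num_iters=100, tol=1e-8):
    """Estimate the largest singular value of A by power iteration on A^T A.

    Only matrix-vector products are needed, so A can be a dense array or a
    scipy.sparse matrix, which suits the sparse storage described above.
    """
    rng = np.random.default_rng(0)
    v = rng.standard_normal(A.shape[1])
    v /= np.linalg.norm(v)
    sigma = 0.0
    for _ in range(num_iters):
        w = A.T @ (A @ v)                 # one power-iteration step on A^T A
        norm_w = np.linalg.norm(w)
        sigma_new = np.sqrt(norm_w)       # ||A^T A v|| -> sigma_max^2 at convergence
        v = w / norm_w
        if abs(sigma_new - sigma) < tol:  # stop once the estimate stabilizes
            break
        sigma = sigma_new
    return sigma
```

Because only matrix-vector products are used, the cost per iteration stays proportional to the number of observed entries when the matricized tensor is stored sparsely.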

Highlights

  • Real-world data are often sparse but rich in structure and can be stored in arrays

  • This paper proposes a new Nonparametric Tensor Completion (NTC) method based on gradient descent and a nonconvex penalty

Summary

Introduction

Real-world data are often sparse but rich in structure and can be stored in arrays; tensors are the higher-order generalizations of such arrays. A common approach is to decompose the matrix into two factor matrices and use them to calculate the missing data [8,9]. Another approach is to cast completion as a rank minimization problem. Reference [16] proposed a dual framework for low-rank tensor completion via nuclear norm constraints. We use gradient descent to solve the optimization problem of tensor completion and build a gradient tensor with tensor matricizations and Singular Value Decomposition (SVD). Since every dimension of a general tensor plays a symmetric role, we select the optimal gradient tensor via the scaled latent nuclear norm in each iteration.
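To make the matricization step concrete, the following is a minimal NumPy sketch of mode-n unfolding and its inverse, together with a simplified stand-in for the unfolding-direction selection. The paper's scaled latent nuclear norm is not reproduced here; the plain nuclear-norm comparison across modes only illustrates the selection step, and all helper names are ours.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-`mode` matricization: the mode-`mode` fibers become the columns."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def fold(matrix, mode, shape):
    """Inverse of `unfold`: reshape the matrix back to a tensor of `shape`."""
    moved = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(matrix.reshape(moved), 0, mode)

# Score each candidate unfolding of a 3-way tensor by its nuclear norm,
# a simplified stand-in for the scaled latent nuclear norm used to pick
# the unfolding direction in each iteration.
X = np.random.rand(4, 5, 6)
norms = [np.linalg.norm(unfold(X, m), ord='nuc') for m in range(X.ndim)]
best_mode = int(np.argmin(norms))
```

In each iteration, the method would pick one such unfolding, build the gradient tensor from its SVD, and fold the result back; the example above only shows how the candidate unfoldings are formed and scored.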

Symbols and Formulas
Related Algorithms
Problem
Iterative Calculation Based on Gradient Descent
Proof of Iterative Convergence
Selection of the Unfolding Direction
Design of the Iteration Step-Size
Optimization of Calculation
Analysis of Time Complexity
Experiments
Performance Comparison
Methods
Effectiveness of Our Step-Size Design
Conclusions

