Abstract

Dimension reduction refers to analytical methods for reconstructing high-order tensors whose intrinsic rank is much smaller than the dimension of the ambient measurement space. This is the case for most real-world datasets in signal processing, imaging, and machine learning. The CANDECOMP/PARAFAC (CP, aka canonical polyadic) tensor completion is a widely used approach for finding a low-rank approximation of a given tensor. In the tensor model (Sanogo and Navasca in 2018 52nd Asilomar conference on signals, systems, and computers, pp 845–849, https://doi.org/10.1109/ACSSC.2018.8645405, 2018), a sparse regularization minimization problem via the ℓ1 norm was formulated with an appropriate choice of the regularization parameter. The choice of the regularization parameter is important for approximation accuracy. With the emergence of massive data, computing the regularization parameter via classical approaches (Gazzola and Sabaté Landman in GAMM-Mitteilungen 43:e202000017, 2020) such as weighted generalized cross validation (WGCV) (Chung et al. in Electr Trans Numer Anal 28:2008, 2008), the unbiased predictive risk estimator (Stein in Ann Stat 9:1135–1151, 1981; Vogel in Computational methods for inverse problems, 2002), and the discrepancy principle (Morozov in Doklady Akademii Nauk, Russian Academy of Sciences, pp 510–512, 1966) imposes an onerous computational burden. To improve the efficiency of choosing the regularization parameter and to improve the accuracy of the CP tensor approximation, we propose a new tensor completion algorithm that embeds the flexible hybrid method (Gazzola in Flexible Krylov methods for ℓp regularization) into the framework of the CP tensor. The main benefits of this method are that regularization is incorporated automatically and efficiently, and that reconstruction accuracy and algorithmic robustness are improved.
Numerical examples from image reconstruction and model order reduction demonstrate the efficacy of the proposed algorithm.
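The sparsity-regularized CP completion problem described above can be written in a common generic form (the notation here is illustrative and may differ from the paper's exact formulation): one seeks factor matrices A, B, C and scalings α minimizing a data-fit term over the observed entries plus an ℓ1 penalty on α weighted by the regularization parameter λ:

```latex
\min_{\alpha, A, B, C} \;
\frac{1}{2} \Big\| \mathcal{P}_{\Omega}\Big( \mathcal{T}
  - \sum_{r=1}^{R} \alpha_r \, a_r \circ b_r \circ c_r \Big) \Big\|_F^2
+ \lambda \, \| \alpha \|_1
```

Here \(\mathcal{P}_{\Omega}\) projects onto the set \(\Omega\) of observed entries, \(a_r, b_r, c_r\) are the columns of \(A, B, C\), and \(\circ\) denotes the vector outer product. The size of λ balances data fidelity against sparsity of α, which is why its selection governs approximation accuracy.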

Highlights

  • Tensor computations have become prevalent across many fields, including mathematics [16, 28], computer science [9, 22, 35], engineering [14], and data science [1, 27]

  • We present a more adaptive, practical, and methodical way of calculating the regularization parameter λ using the flexible hybrid method, tailored for use in the CP tensor framework

  • We show that recent efforts in tensor-based model reduction, such as randomized CP tensor decomposition [17] and tensor POD [56], have yielded promising developments that reduce the computational effort of many-query computations and repeated output evaluations for different values of inputs of interest, where classical model order reduction approaches [38, 39] such as Reduced Basis Methods [8, 44] and Proper Orthogonal Decomposition (POD) face a heavy computational burden


Introduction

Tensor computations have become prevalent across many fields, including mathematics [16, 28], computer science [9, 22, 35], engineering [14], and data science [1, 27]. Tensor-based methods are gaining ground in solving complex problems in scientific computing. The tensor rank problem is crucial in reconstructing a given tensor T. In the CP model, T is approximated by a sum of R rank-one tensors; the vectors forming these rank-one tensors are concatenated into what we call the factor matrices A, B, and C, and the elements of the vector α of size R are the scalings of the rank-one tensors. This tensor factorization is the well-known canonical polyadic or CANDECOMP/PARAFAC (CP) decomposition. Optimization techniques, namely multiblock alternating methods, are the standard approach for finding the factor matrices of a given tensor and its rank.
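The CP construction described above can be sketched in a few lines of NumPy: given factor matrices A, B, C and scalings α, the tensor is the sum of R scaled rank-one outer products. This is an illustrative sketch, not the paper's implementation; all names and sizes here are arbitrary choices for demonstration.

```python
import numpy as np

def cp_to_tensor(alpha, A, B, C):
    """Assemble T = sum_r alpha_r * (a_r ∘ b_r ∘ c_r) from CP factors.

    alpha: length-R vector of scalings
    A, B, C: factor matrices of shapes (I, R), (J, R), (K, R),
             whose columns are the vectors of the rank-one terms.
    """
    # einsum sums all R rank-one outer products in a single call
    return np.einsum('r,ir,jr,kr->ijk', alpha, A, B, C)

# Small synthetic example with arbitrary dimensions I, J, K and rank R
rng = np.random.default_rng(0)
I, J, K, R = 4, 5, 6, 3
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
alpha = rng.standard_normal(R)

T = cp_to_tensor(alpha, A, B, C)
print(T.shape)  # (4, 5, 6)
```

By construction, a tensor assembled this way has CP rank at most R; the completion problem runs in the opposite direction, recovering α, A, B, C (typically via alternating minimization) from partially observed entries of T.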
