Abstract

Cross Tensor Approximation (CTA) is a generalization of cross/skeleton and CUR Matrix Approximation (CMA) and is a suitable tool for fast low-rank tensor approximation. It facilitates interpreting the underlying data tensors and decomposing/compressing tensors so that their structure, such as nonnegativity, smoothness, or sparsity, can potentially be preserved. This paper reviews and extends state-of-the-art deterministic and randomized algorithms for CTA with intuitive graphical illustrations. We discuss several possible generalizations of the CMA to tensors, including CTAs based on fiber selection, slice-tube selection, and lateral-horizontal slice selection. The main focus is on CTA algorithms using the Tucker and tubal SVD (t-SVD) models, while we provide references to other decompositions such as the Tensor Train (TT), Hierarchical Tucker (HT), and Canonical Polyadic (CP) decompositions. We evaluate the CTA algorithms through extensive computer simulations on the compression of color and medical images and compare their performance.
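
For reference, the matrix-level construction that CTA generalizes can be sketched in a few lines of NumPy. The snippet below is a minimal illustration rather than the paper's algorithm: it samples rows and columns uniformly at random and uses the pseudoinverse of their intersection as the middle factor; the function name and the sampling scheme are assumptions made for this example.

import numpy as np

def cross_matrix_approximation(X, r1, r2, seed=None):
    """Cross/CUR sketch X ~ C @ U @ R from sampled columns and rows.

    C holds r2 sampled columns, R holds r1 sampled rows, and U is the
    pseudoinverse of their intersection W = X[I, J] (one standard choice).
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    I = rng.choice(m, size=r1, replace=False)  # sampled row indices
    J = rng.choice(n, size=r2, replace=False)  # sampled column indices
    C = X[:, J]                                # selected columns
    R = X[I, :]                                # selected rows
    U = np.linalg.pinv(X[np.ix_(I, J)])        # middle factor W^+
    return C, U, R

# Toy usage: a random rank-5 matrix is recovered (almost surely) exactly
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))
C, U, R = cross_matrix_approximation(X, r1=5, r2=5, seed=0)
print(np.linalg.norm(X - C @ U @ R) / np.linalg.norm(X))  # ~1e-14

With U chosen as the pseudoinverse of the intersection, the approximation is exact whenever the sampled intersection captures the full rank of X, which the rank-5 example above satisfies almost surely.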

Highlights

  • Tensor decompositions are efficient and widely used tools for multi-way data processing and, in particular, they can be utilized to compress data tensors without destroying their intrinsic multidimensional structure

  • We evaluate Cross Tensor Approximation (CTA) algorithms through extensive computer simulations on the compression of color and medical images and compare their performance

  • Tensor decompositions such as the Tucker decomposition [9], Hierarchical Tucker (HT) decomposition [10], [11], Tensor Train/Tensor Chain (TT-TC) decomposition [12]–[14], and tubal SVD (t-SVD) [15]–[17] each generalize the notion of matrix rank to tensors in an efficient way


Summary

INTRODUCTION

Tensor decompositions are efficient and widely used tools for multi-way data processing (analysis) and, in particular, they can be utilized to compress data tensors without destroying their intrinsic multidimensional structure. In the CMA, a subset of the columns and rows of a given matrix X is sampled; when the columns or rows are selected by random sampling techniques, the framework is often known as randomized CMA. Approximating a matrix from sampled columns is also called matrix column selection or interpolative decomposition, and in the field of machine learning and data mining this problem is known as column (feature) selection. Such an approximation is exact if rank(X) ≤ min{R1, R2} [67]; this case, however, is of less practical interest because we need to access the whole data matrix X. Quite similarly to the CMA, in which a part of the columns and rows of a given matrix is sampled, here a set of fibers (in different modes) is selected, and the goal is to compute a Tucker approximation based on these sampled fibers: sampling Rn fibers in the n-th mode generalizes the CX matrix approximation to tensors.
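
To make the fiber-selection idea concrete, the following NumPy sketch builds a Tucker-format approximation from fibers sampled uniformly at random in each mode and computes the core with pseudoinverses of the sampled fiber matrices. This is only one simple instance of the family of CTA algorithms the paper surveys; the function names (unfold, mode_product, fiber_cta, tucker_to_tensor) and the uniform sampling are illustrative assumptions.

import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: the fibers of the given mode become columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """n-mode (tensor-times-matrix) product T x_mode M."""
    Tm = np.moveaxis(T, mode, 0)
    out = M @ Tm.reshape(Tm.shape[0], -1)
    return np.moveaxis(out.reshape((M.shape[0],) + Tm.shape[1:]), 0, mode)

def fiber_cta(T, ranks, seed=None):
    """Fiber-selection CTA sketch: sample ranks[n] fibers in each mode and
    build a Tucker core via pseudoinverses of the sampled fiber matrices."""
    rng = np.random.default_rng(seed)
    factors, core = [], T.copy()
    for n in range(T.ndim):
        Xn = unfold(T, n)                                 # mode-n unfolding
        J = rng.choice(Xn.shape[1], size=ranks[n], replace=False)
        Cn = Xn[:, J]                                     # sampled mode-n fibers
        factors.append(Cn)
        core = mode_product(core, np.linalg.pinv(Cn), n)  # core = core x_n Cn^+
    return core, factors

def tucker_to_tensor(core, factors):
    """Reconstruct X_hat = core x_1 C1 x_2 C2 ... x_N CN."""
    for n, Cn in enumerate(factors):
        core = mode_product(core, Cn, n)
    return core

# Toy usage: a tensor of multilinear rank (3, 4, 2) is recovered (almost surely) exactly
rng = np.random.default_rng(1)
G = rng.standard_normal((3, 4, 2))
A = [rng.standard_normal((30, 3)), rng.standard_normal((40, 4)), rng.standard_normal((20, 2))]
X = tucker_to_tensor(G, A)
core, factors = fiber_cta(X, ranks=(3, 4, 2), seed=1)
X_hat = tucker_to_tensor(core, factors)
print(np.linalg.norm(X - X_hat) / np.linalg.norm(X))  # ~1e-14

When the tensor has multilinear rank at most (R1, ..., RN) and the sampled fibers span the corresponding mode-n ranges, this reconstruction is exact, mirroring the CMA exactness condition quoted above.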

LOW-RANK APPROXIMATIONS BASED ON
RANDOMIZED HIGHER-ORDER INTERPOLATORY
CTA BASED ON SLICE-FIBER SELECTION
NUMBER OF PARAMETERS AND COMPUTATIONAL COMPLEXITY
APPLICATIONS OF CTA MODELS
ROBUST PCA
TENSOR RANK SELECTION METHODS
SIMULATIONS
CONCLUSION