Abstract

Big data analysis has become a crucial part of new emerging technologies such as the Internet of Things, cyber-physical analysis, deep learning, and anomaly detection. Among many other techniques, dimensionality reduction plays a key role in such analyses and facilitates feature selection and feature extraction. Randomized algorithms are efficient tools for handling big data tensors. They accelerate the decomposition of large-scale data tensors by reducing both the computational complexity of deterministic algorithms and the communication among different levels of the memory hierarchy, which is the main bottleneck in modern computing environments and architectures. In this article, we review recent advances in randomization for the computation of the Tucker decomposition and the Higher Order SVD (HOSVD). We discuss random projection and sampling approaches, as well as single-pass and multi-pass randomized algorithms, and how to utilize them in the computation of the Tucker decomposition and the HOSVD. Simulations on synthetic and real datasets are provided to compare the performance of some of the best and most promising algorithms.
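To make the random-projection idea concrete, the following is a minimal sketch of a randomized HOSVD in NumPy: each mode unfolding is compressed with a Gaussian test matrix, an orthonormal basis of the sketch becomes the factor matrix, and the core tensor is obtained by projecting onto those bases. This is an illustrative sketch under our own naming and parameter choices, not the exact algorithm from the article.

```python
import numpy as np

def randomized_hosvd(X, ranks, oversample=10, seed=0):
    """Illustrative randomized HOSVD via Gaussian random projection.

    For each mode n, the mode-n unfolding X_(n) is sketched with a
    Gaussian test matrix; an orthonormal basis of the sketch serves
    as the mode-n factor matrix. (Hypothetical sketch; names and
    parameters are assumptions, not the paper's exact algorithm.)
    """
    rng = np.random.default_rng(seed)
    factors = []
    for n in range(X.ndim):
        # Mode-n unfolding: bring mode n to the front, flatten the rest.
        Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)
        # Random projection: sketch the (potentially huge) column space.
        Omega = rng.standard_normal((Xn.shape[1], ranks[n] + oversample))
        # Orthonormal basis of the sketch, truncated to the target rank.
        Q, _ = np.linalg.qr(Xn @ Omega)
        factors.append(Q[:, :ranks[n]])
    # Core tensor: project X onto the factor bases in every mode.
    S = X
    for n, U in enumerate(factors):
        S = np.moveaxis(np.tensordot(U.T, np.moveaxis(S, n, 0), axes=1), 0, n)
    return S, factors
```

The single pass over each unfolding multiplies by a thin random matrix instead of computing a full SVD, which is where the speedup over the deterministic HOSVD comes from.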

Highlights

  • Real-world data are often multidimensional and are naturally represented as tensors or multidimensional arrays

  • The results show that the Randomized Sequentially Truncated HOSVD (R-STHOSVD) algorithm (Algorithm 8) and the randomized sampling Tucker (R-ST) algorithm (Algorithm 10) are the fastest algorithms for computing the Higher Order SVD (HOSVD) or the Tucker decomposition

  • The results show that the R-STHOSVD and the Randomized Sampling Tucker Approximation (R-ST) algorithms are the most efficient randomized algorithms for the low multilinear rank approximation of the data tensor (37)
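The efficiency of the sequentially truncated variant highlighted above comes from compressing the running core tensor immediately after each mode is processed, so later unfoldings are much smaller. A minimal sketch of a randomized STHOSVD, again using a Gaussian range finder as an illustrative assumption (not the article's exact Algorithm 8):

```python
import numpy as np

def r_sthosvd(X, ranks, oversample=10, seed=0):
    """Illustrative randomized sequentially truncated HOSVD.

    Unlike the plain HOSVD, the tensor is truncated in each mode as
    soon as that mode's factor is found, shrinking every subsequent
    unfolding. (Hypothetical sketch; the Gaussian sketch + QR range
    finder is an assumption for illustration.)
    """
    rng = np.random.default_rng(seed)
    S = X
    factors = []
    for n in range(X.ndim):
        Sn = np.moveaxis(S, n, 0).reshape(S.shape[n], -1)
        Omega = rng.standard_normal((Sn.shape[1], ranks[n] + oversample))
        Q, _ = np.linalg.qr(Sn @ Omega)
        U = Q[:, :ranks[n]]
        factors.append(U)
        # Truncate the running core in mode n before moving on.
        S = np.moveaxis(np.tensordot(U.T, np.moveaxis(S, n, 0), axes=1), 0, n)
    return S, factors
```

Because mode n + 1 operates on the already-truncated core, the cost of each successive sketch drops sharply, which is consistent with the timing advantage reported for R-STHOSVD.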

Summary

INTRODUCTION

Real-world data are often multidimensional and are naturally represented as tensors or multidimensional (multiway) arrays. Decomposing such large-scale tensors with deterministic algorithms is computationally demanding. It has been shown that randomized algorithms can tackle this difficulty by exploiting only a part of the data tensors, with applications in tensor regression [17], tensor completion [18], [19], and deep learning [20]. Because of this property, they scale quite well with the tensor dimensions and orders. In this article, we systematically review a variety of randomized algorithms for decomposing large-scale tensors in the Tucker and the HOSVD formats.

PRELIMINARY CONCEPTS AND NOTATIONS
RANDOM PROJECTION
SAMPLING TECHNIQUES
RANDOMIZED ALGORITHMS FOR SOLVING LEAST-SQUARES PROBLEMS
TUCKER DECOMPOSITION AND HIGHER ORDER SVD (HOSVD)
RANDOMIZED TUCKER DECOMPOSITION AND RANDOMIZED HOSVD
RANDOMIZED RANDOM PROJECTION TUCKER
RANDOMIZED COUNT-SKETCH TUCKER DECOMPOSITION
DISCUSSION
CONCLUSION

