Abstract

This survey gives a comprehensive overview of tensor techniques and their applications in machine learning. Tensors can represent higher-order statistics. Nowadays, many applications based on machine learning algorithms require large amounts of structured high-dimensional input data, and as the data grow, the complexity of these algorithms increases exponentially with the vector size. Researchers have found that using tensors instead of the original input vectors can effectively mitigate these high-dimensional problems. For readers interested in learning about tensors, this survey introduces the basics: tensor operations, tensor decomposition, several tensor-based algorithms, and applications of tensors in machine learning and deep learning. Tensor decomposition is highlighted because it can effectively extract the structural features of data, and many algorithms and applications are built on it. The paper is organized as follows. Part one introduces basic tensor operations, including tensor decomposition. Part two presents in detail applications of tensors in machine learning and deep learning, including regression, supervised classification, data preprocessing, and unsupervised classification based on low-rank tensor approximation algorithms. Finally, we briefly discuss pressing challenges, opportunities, and prospects for tensors.

Highlights

  • Because of its structured representation of data and its ability to reduce the complexity of multidimensional arrays, the tensor has gradually been applied in various fields, such as dictionary learning (Ghassemi et al.) [88], magnetic resonance imaging (MRI) (Xu et al.) [148], spectral data classification (Makantasis et al.) [69], and image deblurring (Geng et al.) [75]

  • The main purpose of this survey is to introduce basic machine learning applications related to tensor decomposition and the tensor network model

  • PART TWO: TENSOR APPLICATIONS IN MACHINE LEARNING AND DEEP LEARNING. The second part builds on the tensor operations and tensor decompositions introduced in the first part

Summary

PART ONE

A third-order tensor looks like a cuboid (see figure 2). We use underlined uppercase letters to denote tensors, that is, Y ∈ R^(I1×I2×I3×···×IN) represents an Nth-order tensor. Although this notation could also cover 2nd-order tensors (matrices) and 1st-order tensors (vectors), we use separate symbols for them: Y ∈ R^(I×J) for a matrix, y ∈ R^I for a vector, and y ∈ R for a scalar. We write y_(i1,i2,···,iN) for the entries of an Nth-order tensor Y ∈ R^(I1×I2×I3×···×IN). For a third-order tensor, a tensor fiber (see figure 7) is a vector obtained by fixing two of the three indices, and a tensor slice (see figure 8) is a matrix obtained by fixing one index. For example, consider a 3rd-order tensor C ∈ R^(2×2×2). If we fix the second and third indices, we get four fibers C(:, 1, 1), C(:, 1, 2), C(:, 2, 1), C(:, 2, 2): [ 1 5 ], [ 2 6 ], [ 3 7 ], [ 4 8 ]
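The fiber and slice example above can be checked with a short NumPy sketch (NumPy is not part of the survey; it is used here purely for illustration, with 0-based indexing where the text uses 1-based):

```python
import numpy as np

# A 3rd-order tensor C in R^(2x2x2), arranged so its mode-1 fibers match
# the text: C(:,1,1)=[1 5], C(:,1,2)=[2 6], C(:,2,1)=[3 7], C(:,2,2)=[4 8].
C = np.array([[[1, 2],
               [3, 4]],
              [[5, 6],
               [7, 8]]])

# Fibers: fix two indices and let the remaining one vary.
print(C[:, 0, 0])  # [1 5]
print(C[:, 0, 1])  # [2 6]
print(C[:, 1, 0])  # [3 7]
print(C[:, 1, 1])  # [4 8]

# Slices: fix one index and keep a matrix.
print(C[:, :, 0])  # [[1 3]
                   #  [5 7]]
```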

TENSOR OPERATION
TENSOR DECOMPOSITION
The inner product of two tensors
THE NATURE AND ALGORITHM OF TT DECOMPOSITION
PART TWO
How to optimize the learning algorithm or avoid the saddle point?
Is it possible to improve low-rank tensor decomposition algorithms?
CONCLUSION