Abstract

Dimensionality reduction is a critical step in the learning process and plays an essential role in many applications. The most popular dimensionality-reduction methods, such as SVD and PCA, operate only on one-dimensional data, so higher-order data such as matrices, and more generally tensors, must be folded into vector format. This folding discards the spatial relationships among features and also increases the risk of overfitting. To address these issues, several methods, such as Generalized Low-Rank Approximation of Matrices (GLRAM) and Multilinear PCA (MPCA), were proposed to handle multi-dimensional data in its original format. As a result, the spatial relationships among features are preserved and the risk of overfitting is reduced; moreover, the time and space complexity of such methods is lower than that of vector-based ones. However, because the multilinear approach needs fewer parameters, its search space is much smaller than that of the vector-based one, which limits the attainable reconstruction quality. To overcome this limitation of multilinear methods such as GLRAM, we propose a novel extension of GLRAM that applies multiple left and right transformation pairs to the projected data instead of a single pair. This provides the problem with a larger feasible region and yields a smaller reconstruction error. The article provides several analytical discussions and experimental results that confirm the quality of the proposed method.
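To make the baseline concrete, the following is a minimal sketch of the standard GLRAM alternating scheme that the abstract builds on: given matrices A_1, …, A_n, find column-orthonormal left and right transformations L and R maximizing the sum of ||L^T A_i R||_F^2, with each A_i reduced to a small core M_i = L^T A_i R. This is an illustration of classical GLRAM only, not of the paper's multi-pair extension; all function and variable names here are the sketch's own, not the paper's.

```python
import numpy as np

def glram(As, l1, l2, n_iter=20, seed=0):
    """Sketch of GLRAM's alternating maximization: find column-orthonormal
    L (r x l1) and R (c x l2) maximizing sum_i ||L^T A_i R||_F^2."""
    r, c = As[0].shape
    rng = np.random.default_rng(seed)
    # Initialize R with orthonormal columns via QR of a random matrix.
    R = np.linalg.qr(rng.standard_normal((c, l2)))[0]
    for _ in range(n_iter):
        # Fix R: L spans the top-l1 eigenvectors of sum_i A_i R R^T A_i^T.
        ML = sum(A @ R @ R.T @ A.T for A in As)
        L = np.linalg.eigh(ML)[1][:, -l1:]
        # Fix L: R spans the top-l2 eigenvectors of sum_i A_i^T L L^T A_i.
        MR = sum(A.T @ L @ L.T @ A for A in As)
        R = np.linalg.eigh(MR)[1][:, -l2:]
    return L, R

def reconstruct(A, L, R):
    # Each matrix is approximated as A ~ L M R^T with core M = L^T A R,
    # so a single (L, R) pair is shared by all matrices in the collection.
    return L @ (L.T @ A @ R) @ R.T
```

Because one shared (L, R) pair must serve every matrix in the collection, the feasible region is small; the extension described above enlarges it by allowing multiple such transformation pairs.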
