Abstract

Tensor-ring (TR) decomposition is a powerful tool for exploiting the low-rank property of multiway data and has demonstrated great potential in a variety of important applications. In this article, non-negative TR (NTR) decomposition and graph-regularized NTR (GNTR) decomposition are proposed. The former equips TR decomposition with the ability to learn parts-based representations by imposing non-negativity on the core tensors, and the latter additionally introduces a graph regularization into the NTR model to capture manifold geometry information from tensor data. Both of the proposed models extend TR decomposition and can serve as powerful representation learning tools for non-negative multiway data. Optimization algorithms based on the accelerated proximal gradient method are derived for NTR and GNTR. We also empirically justify that the proposed methods provide more interpretable and physically meaningful representations; for example, they are able to extract parts-based components with meaningful color and line patterns from objects. Extensive experimental results demonstrate that the proposed methods outperform state-of-the-art tensor-based methods in clustering and classification tasks.
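To make the core idea concrete, the sketch below fits a non-negative tensor-ring model to a small 3-way tensor. It is a simplified illustration, not the paper's algorithm: it uses plain projected gradient descent (clipping the core tensors at zero after each step) rather than the accelerated proximal gradient scheme derived for NTR, and all sizes, ranks, and the learning rate are assumptions chosen for the toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small synthetic non-negative 3-way data tensor (sizes are arbitrary).
n1, n2, n3 = 4, 4, 4
X = rng.uniform(0.0, 1.0, size=(n1, n2, n3))

# TR cores G_k with shape (r_{k-1}, n_k, r_k); the ring closes, so r_3 = r_0.
r = 2
G1 = rng.uniform(0.0, 0.5, size=(r, n1, r))
G2 = rng.uniform(0.0, 0.5, size=(r, n2, r))
G3 = rng.uniform(0.0, 0.5, size=(r, n3, r))

def tr_reconstruct(G1, G2, G3):
    # X_hat[i, j, k] = trace(G1[:, i, :] @ G2[:, j, :] @ G3[:, k, :])
    return np.einsum('aib,bjc,cka->ijk', G1, G2, G3)

def loss(G1, G2, G3):
    return 0.5 * np.sum((X - tr_reconstruct(G1, G2, G3)) ** 2)

lr = 0.02          # fixed step size; an assumption for this toy problem
loss0 = loss(G1, G2, G3)
for _ in range(500):
    R = X - tr_reconstruct(G1, G2, G3)            # residual tensor
    # Gradients of the squared-error fit w.r.t. each core.
    g1 = -np.einsum('ijk,bjc,cka->aib', R, G2, G3)
    g2 = -np.einsum('ijk,cka,aib->bjc', R, G3, G1)
    g3 = -np.einsum('ijk,aib,bjc->cka', R, G1, G2)
    # Projected gradient step: clipping at zero enforces non-negativity,
    # which is what yields the parts-based representation.
    G1 = np.maximum(G1 - lr * g1, 0.0)
    G2 = np.maximum(G2 - lr * g2, 0.0)
    G3 = np.maximum(G3 - lr * g3, 0.0)

print(loss0, loss(G1, G2, G3))  # the fit error decreases over the iterations
```

The graph-regularized variant (GNTR) would add a Laplacian-based penalty on one core's mode-2 slices to the loss above; it is omitted here to keep the sketch minimal.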
