Abstract

The papers in this special section focus on tensor decomposition for signal processing and machine learning. Tensor decomposition, also called tensor factorization, is useful for representing and analyzing multi-dimensional data. Tensor decompositions have been applied in signal processing (speech, acoustics, communications, radar, biomedicine), in machine learning (clustering, dimensionality reduction, latent factor models, subspace learning), and well beyond. These tools aid in learning a variety of models, including community models, probabilistic context-free grammars, Gaussian mixture models, and two-layer neural networks. Although considerable research has been carried out in this area, many challenges remain to be explored and addressed, including tensor deflation, massive-scale tensor decompositions, and the high computational cost of decomposition algorithms. The multi-dimensional nature of signals and ever-“bigger” data, particularly in next-generation advanced information and communication technology systems, offer good opportunities to exploit tensor-based models and tensor networks, with the aim of meeting strong requirements on system flexibility, convergence, and efficiency.
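To make the central technique concrete, the following is a minimal illustrative sketch (not drawn from any paper in the section) of the canonical polyadic (CP) decomposition of a 3-way tensor, fitted by alternating least squares with plain NumPy; the function names and the C-order unfolding convention are choices made here for the example.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: arrange the mode-`mode` fibers of T as rows,
    # with the remaining modes flattened in C order.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    # Column-wise Kronecker product of factor matrices A (I x r) and B (J x r),
    # giving an (I*J) x r matrix.
    r = A.shape[1]
    return (A[:, None, :] * B[None, :, :]).reshape(-1, r)

def cp_als(T, rank, n_iter=200, seed=0):
    """Rank-`rank` CP decomposition of a 3-way tensor via alternating
    least squares: T_ijk ~ sum_r A_ir * B_jr * C_kr."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(n_iter):
        # Update each factor in turn, holding the other two fixed;
        # each step is a linear least-squares problem.
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

def cp_reconstruct(A, B, C):
    # Rebuild the tensor from its factors: T_ijk = sum_r A_ir B_jr C_kr.
    return np.einsum('ir,jr,kr->ijk', A, B, C)
```

For a low-rank tensor, a few hundred ALS sweeps typically drive the relative reconstruction error close to zero; the high per-iteration cost of such updates on large tensors is exactly the kind of computational challenge the abstract mentions.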
