Abstract

Least-mean-squares (LMS) algorithms are a prevalent approach to implementing linear adaptive filters, whose coefficients are updated sample by sample to track time-varying dynamics. Because the memory and computational costs of LMS filters are very low, they have been widely adopted in many real-time signal-processing applications. However, the input of a conventional LMS filter must be a sequence of scalar samples (a one-dimensional time series), an assumption that is too restrictive for the multi-channel (high-dimensional) signals and multi-relational data of the big-data era. Handling high-dimensional data arrays, a.k.a. tensors, is crucial for capturing the variety and complex interrelations of such data. Owing to the lack of a sufficient mathematical framework governing the relevant tensor operations, a general tensor LMS filter, whose input is allowed to be an arbitrary tensor, has never been established, to the best of our knowledge. In this work, we develop a new mathematical framework for tensors to establish the general tensor least-mean-squares (TLMS) filter theory, and we propose two novel TLMS algorithms whose update rules are based on stochastic gradient descent and Newton's method, respectively. Furthermore, the established tensor calculus enables performance evaluation of the convergence rate and misadjustment of the proposed TLMS filters. Finally, the memory and computational complexities of the new TLMS algorithms are also studied in this paper.
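Since the abstract does not specify the TLMS update rules, the following minimal sketch illustrates only the conventional scalar LMS filter that the paper generalizes; the function name, tap count, and step size are illustrative assumptions, not the authors' algorithm.

import numpy as np

def lms_filter(x, d, num_taps=4, mu=0.01):
    """Conventional scalar LMS filter (illustrative sketch, not the
    paper's TLMS algorithm): adapts tap weights w sample by sample
    via stochastic gradient descent on the instantaneous squared error."""
    w = np.zeros(num_taps)                     # adaptive filter coefficients
    y = np.zeros(len(x))                       # filter outputs
    e = np.zeros(len(x))                       # a-priori estimation errors
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]    # regressor [x[n], ..., x[n-num_taps+1]]
        y[n] = w @ u                           # filter output
        e[n] = d[n] - y[n]                     # error against the desired signal
        w = w + mu * e[n] * u                  # stochastic gradient-descent update
    return y, e, w

# Example (hypothetical data): identify an unknown FIR system from noisy observations.
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
h = np.array([0.5, -0.3, 0.1, 0.05])           # assumed unknown system
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
y, e, w = lms_filter(x, d)                     # w converges toward h

In this classical setting, the step size mu trades off convergence rate against misadjustment, the two performance measures the abstract says are evaluated for the proposed TLMS filters.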
