Abstract

Many applications involve estimation of a signal matrix from a noisy data matrix. In such cases, it has been observed that estimators that shrink or truncate the singular values of the data matrix perform well when the signal matrix has approximately low rank. In this article, we generalize this approach to the estimation of a tensor of parameters from noisy tensor data. We develop new classes of estimators that shrink or threshold the mode-specific singular values from the higher-order singular value decomposition. These classes of estimators are indexed by tuning parameters, which we adaptively choose from the data by minimizing Stein's unbiased risk estimate. In particular, this procedure provides a way to estimate the multilinear rank of the underlying signal tensor. Using simulation studies under a variety of conditions, we show that our estimators perform well when the mean tensor has approximately low multilinear rank, and perform competitively when the signal tensor does not have approximately low multilinear rank. We illustrate the use of these methods in an application to multivariate relational data.

Highlights

  • Tensor data arise in fields as diverse as relational data, neuroimaging, psychometrics, chemometrics, signal processing, and machine learning.

  • This paper introduces new classes of shrinkage estimators for tensor-valued data that are higher-order generalizations of existing matrix spectral estimators.

  • Each class is indexed by tuning parameters whose values we choose by minimizing an unbiased estimate of the risk.



Introduction

Tensor data arise in fields as diverse as relational data [Hoff, 2014], neuroimaging [Zhang et al., 2014, Li and Zhang, 2015], psychometrics [Kiers and Mechelen, 2001], chemometrics [Smilde et al., 2005, Bro, 2006], signal processing [Cichocki et al., 2014], and machine learning [Tao et al., 2005], among others [Kroonenberg, 2008]. More recent work has focused on estimators whose spectral functions f_i(·) induce sparsity in the singular values, which may be more appropriate than estimators (5), (6), and (7) when the true signal itself has (approximately) low rank. We introduce a family of estimators that shrink tensor-valued data towards having (approximately) low multilinear rank. We perform this shrinkage on a reparameterization of the higher-order singular value decomposition (HOSVD) of De Lathauwer et al. [2000], shrinking the mode-specific singular values of the data tensor towards zero. We present two specific estimators of this type and discuss the intuition behind them.
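The mode-specific shrinkage just described can be sketched in NumPy: for each mode, unfold the tensor into a matrix, soft-threshold the singular values of that unfolding, and fold the result back. This is only an illustrative sequential approximation with hand-picked thresholds, not the article's exact SURE-tuned estimator; the helper names `unfold`, `fold`, and `hosvd_soft_threshold` and the per-mode thresholds `lams` are assumptions made for this sketch.

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of `unfold` for a tensor of the given original shape."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def hosvd_soft_threshold(Y, lams):
    """Shrink the mode-specific singular values of Y towards zero.

    For each mode k, soft-threshold the singular values of the mode-k
    unfolding by lams[k]. Sequential mode-by-mode shrinkage is a
    simplification of the joint HOSVD-based estimator in the article.
    """
    X = np.array(Y, dtype=float)
    for k, lam in enumerate(lams):
        M = unfold(X, k)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        s = np.maximum(s - lam, 0.0)          # soft-thresholding step
        X = fold((U * s) @ Vt, k, X.shape)
    return X
```

With thresholds of zero the data tensor is returned unchanged, and sufficiently large thresholds shrink every mode-specific singular value to zero, so the estimate interpolates between the raw data and the zero tensor as the tuning parameters grow; in the article these parameters are chosen by minimizing Stein's unbiased risk estimate rather than fixed by hand.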

The higher-order SVD and higher-order spectral estimators
Stein’s unbiased risk estimate
Differentials of the HOSVD
Divergence of higher-order spectral estimators
Simulation studies
Multivariate relational data example
Discussion
A Simplification of the divergence
B Newton step for optimization
C General spectral functions
D SURE for estimators that shrink elements in S