Abstract

Dimension reduction techniques are at the core of the statistical analysis of high-dimensional and functional observations. Whether the data are vector- or function-valued, principal component techniques play a central role in this context. The success of principal components in the dimension reduction problem is explained by the fact that, for any $K\le p$, the first $K$ coefficients in the expansion of a $p$-dimensional random vector $\mathbf{X}$ in terms of its principal components provide the best linear $K$-dimensional summary of $\mathbf{X}$ in the mean square sense. The same property holds for a random function and its functional principal component expansion. This optimality feature, however, no longer holds in a time series context: when the observations are serially dependent, principal components and functional principal components lose their optimal dimension reduction property to the so-called dynamic principal components, introduced by Brillinger in 1981 in the vector case, and, in the functional case, to their functional extension proposed by Hörmann, Kidziński and Hallin in 2015.
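To make the optimality claim concrete, the following is a minimal illustrative sketch (not taken from the paper; the simulated data, dimensions, and the competing random subspace are all assumptions chosen for illustration). For independent observations, the projection onto the top-$K$ principal components attains the smallest mean squared reconstruction error among all $K$-dimensional orthogonal linear summaries, which the sketch checks numerically against an arbitrary $K$-dimensional subspace.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, K = 10, 5000, 3

# Simulated i.i.d. p-dimensional observations (no serial dependence), centered
X = rng.standard_normal((n, p)) @ rng.standard_normal((p, p))
X -= X.mean(axis=0)

# Principal components: eigenvectors of the sample covariance matrix
cov = X.T @ X / n
eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
V_pca = eigvecs[:, ::-1][:, :K]             # top-K eigenvectors (orthonormal)

def reconstruction_mse(V):
    """Mean squared error of reconstructing X from its rank-K projection X V V'."""
    X_hat = X @ V @ V.T
    return np.mean(np.sum((X - X_hat) ** 2, axis=1))

# An arbitrary competing K-dimensional linear summary: a random orthonormal basis
Q, _ = np.linalg.qr(rng.standard_normal((p, K)))

print("PCA subspace     MSE:", reconstruction_mse(V_pca))  # smallest achievable
print("random subspace  MSE:", reconstruction_mse(Q))      # larger
```

Under serial dependence, however, this static projection ignores lead and lag information, which is precisely what Brillinger's dynamic principal components, and their functional extension, exploit.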
