Abstract

High-dimensional time series may well be the most common type of dataset in the so-called “big data” revolution, and have entered current practice in many areas, including meteorology, genomics, chemometrics, connectomics, complex physics simulations, biological and environmental research, finance and econometrics. The analysis of such datasets poses significant challenges, from both a statistical and a numerical point of view. The most successful procedures so far have been based on dimension reduction techniques and, more particularly, on high-dimensional factor models. Those models have been developed essentially within time series econometrics, and deserve to be better known in other areas. In this paper, we provide an original time-domain presentation of the methodological foundations of those models (dynamic factor models are usually described via a spectral approach), contrasting such concepts as commonality and idiosyncrasy, factors and common shocks, dynamic and static principal components. That time-domain approach emphasizes the fact that, contrary to the static factor models favored by practitioners, the so-called general dynamic factor model imposes essentially no constraints on the data-generating process, but follows from a general representation result.
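The abstract contrasts static factor models (and static principal components) with the general dynamic factor model (and dynamic principal components). The sketch below is only a toy illustration of that distinction on simulated data, assuming a small panel with a dynamic common component; the variable names, dimensions, and the lag-augmentation shortcut are illustrative assumptions, not the estimator developed in the paper, whose dynamic principal components are typically defined via the spectral density of the panel.

```python
# Illustrative sketch only: contrasts "static" principal-component factor
# extraction with a crude time-domain proxy for dynamic factors obtained by
# augmenting the panel with lags. Genuine dynamic principal components are
# frequency-domain objects; this is just a toy comparison on simulated data.
import numpy as np

rng = np.random.default_rng(0)
T, N, q = 500, 50, 2          # time points, cross-sectional units, common shocks

# Simulate a panel with a dynamic common component:
#   x_it = b_i' u_t + c_i' u_{t-1} + e_it
u = rng.standard_normal((T + 1, q))               # common shocks
B = rng.standard_normal((N, q))                   # contemporaneous loadings
C = rng.standard_normal((N, q))                   # loadings on the lagged shock
common = u[1:] @ B.T + u[:-1] @ C.T               # dynamic common component
X = common + rng.standard_normal((T, N))          # add idiosyncratic noise

def static_factors(X, r):
    """Static principal components: project onto the leading right singular
    vectors of the centered panel (eigenvectors of the sample covariance)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:r].T                          # T x r estimated static factors

# A common component loading on u_t and u_{t-1} requires 2q static factors,
# whereas a time-domain proxy built from the lag-augmented panel [X_t, X_{t-1}]
# still targets only the q underlying common shocks.
F_static = static_factors(X, r=2 * q)
X_lagged = np.hstack([X[1:], X[:-1]])
F_dynamic_proxy = static_factors(X_lagged, r=q)

print(F_static.shape, F_dynamic_proxy.shape)      # (500, 4) (499, 2)
```

The point of the toy example is the dimension count: the same data-generating process needs 2q static factors but only q common shocks, which is the kind of distinction between static and dynamic representations the paper formalizes.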
