In areas ranging from neuroimaging to climate science, advances in data storage and sensor technology have led to a proliferation of multidimensional functional datasets. A common approach to analyzing functional data is to first map the discretely observed functional samples into continuous representations, and then perform downstream statistical analysis on these smooth representations. It is well known that many of the traditional approaches used for one-dimensional functional data representation are plagued by the curse of dimensionality and quickly become intractable as the dimension of the domain increases. In this article, we propose a computational framework for learning continuous representations from a sample of multidimensional functional data that is immune to several manifestations of the curse. The representations are constructed using a set of separable basis functions that are defined to be optimally adapted to the data. We show that the resulting estimation problem can be solved efficiently by the tensor decomposition of a carefully defined reduction transformation of the observed data. Roughness-based regularization is incorporated using a class of differential operator-based penalties. Relevant theoretical properties are also discussed. The advantages of our method over competing methods are thoroughly demonstrated in simulations. We conclude with a real data application of our method to a clinical diffusion MRI dataset. Supplementary materials for this article are available online.
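To make the general idea concrete, the sketch below illustrates (in a heavily simplified form, not the authors' algorithm) how a sample of two-dimensional functional observations can be represented in a data-adapted separable, i.e., tensor-product, basis: marginal bases are estimated from marginal covariance reductions of the data, and each observation is summarized by a small tensor of coefficients. All variable names, grid sizes, and the eigendecomposition-based construction are illustrative assumptions only; regularization and the specific reduction transformation of the paper are omitted.

```python
# Minimal illustrative sketch (assumptions only, not the proposed method):
# learn a separable (tensor-product) basis from 2-D functional samples and
# represent each sample by its coefficient tensor in that basis.
import numpy as np

rng = np.random.default_rng(0)

# Simulate N noisy 2-D functional observations on a p1 x p2 grid.
N, p1, p2 = 50, 40, 30
t1 = np.linspace(0, 1, p1)
t2 = np.linspace(0, 1, p2)
scores = rng.normal(size=(N, 2))
f1 = np.stack([np.sin(2 * np.pi * t1), np.cos(2 * np.pi * t1)])  # marginal factors, dim 1
f2 = np.stack([np.sin(np.pi * t2), t2 - 0.5])                    # marginal factors, dim 2
X = np.einsum('nk,kp,kq->npq', scores, f1, f2)                   # separable signal
X += 0.1 * rng.normal(size=X.shape)                              # observation noise

# Estimate marginal bases from marginal covariance reductions of the centered data.
Xc = X - X.mean(axis=0)
C1 = np.einsum('npq,nrq->pr', Xc, Xc) / (N * p2)  # marginal covariance along dimension 1
C2 = np.einsum('npq,nps->qs', Xc, Xc) / (N * p1)  # marginal covariance along dimension 2
K1, K2 = 2, 2                                     # number of marginal basis functions
U1 = np.linalg.eigh(C1)[1][:, ::-1][:, :K1]       # leading eigenvectors (dimension 1)
U2 = np.linalg.eigh(C2)[1][:, ::-1][:, :K2]       # leading eigenvectors (dimension 2)

# Coefficients of each sample in the K1 x K2 tensor-product basis, and reconstruction.
coefs = np.einsum('npq,pk,ql->nkl', Xc, U1, U2)
Xhat = np.einsum('nkl,pk,ql->npq', coefs, U1, U2) + X.mean(axis=0)
print("relative reconstruction error:",
      np.linalg.norm(Xhat - X) / np.linalg.norm(X))
```

The key computational point the sketch mirrors is that the basis is built from low-dimensional marginal problems rather than a single eigenproblem over the full product grid, which is what keeps the cost from growing multiplicatively with the domain dimension.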