Abstract

Extracting the latent factors of big time series data is an important means of examining the dynamic complex systems under observation. These low-dimensional, “small” representations reveal key insights into the overall mechanisms, which can otherwise be obscured by the notoriously high dimensionality and scale of big data as well as the enormously complicated interdependencies among data elements. However, grand challenges remain: (1) incrementally deriving the multi-mode factors of augmenting big data, and (2) achieving this goal with insufficient a priori knowledge. This study develops an incremental parallel factorization solution (namely I-PARAFAC ) for huge augmenting tensors (multi-way arrays), consisting of three phases over a cutting-edge GPU cluster: in the “giant-step” phase , a variational Bayesian inference ( VBI ) model estimates the distribution of the close neighborhood of each factor with high confidence, without requiring a priori knowledge of the tensor or the problem domain; in the “baby-step” phase , a massively parallel Fast-HALS algorithm (namely G-HALS ) derives the accurate subfactors of each subtensor on the basis of the initial factors; in the final fusion phase , I-PARAFAC fuses the known factors of the original tensor with the accurate subfactors of the “increment” to obtain the final full factors. Experimental results indicate that: (1) the VBI model enables a blind factor approximation , in which the distribution of the close neighborhood of each final factor can be derived quickly (10 iterations for the test case); as a result, this low-complexity model significantly accelerates the derivation of the final accurate factors and lowers the risk of errors; (2) I-PARAFAC significantly outperforms even the latest high-performance counterparts when handling augmenting tensors: the added overhead is proportional only to the increment, whereas the counterparts must repeatedly factorize the whole tensor, and the overhead of fusing subfactors is always minimal; (3) I-PARAFAC can factorize a huge tensor (up to 500 TB over 50 nodes) as a whole, a capacity several orders of magnitude beyond conventional methods, and the runtime scales on the order of $\frac{1}{n}$ with the number of compute nodes $n$; (4) I-PARAFAC supports correct factorization-based analysis of a real 4-order EEG dataset captured from a variety of epilepsy patients. Overall, it should be noted that counterpart methods must re-factorize the whole tensor from scratch whenever the tensor is augmented in any dimension; in contrast, the I-PARAFAC framework only needs to incrementally compute the full factors of the huge augmented tensor.
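The abstract outlines a three-phase workflow: a blind VBI initialization (the “giant step”), per-subtensor factorization via G-HALS (the “baby step”), and a fusion of the original tensor's known factors with the increment's subfactors. As a rough illustration only, the NumPy sketch below mimics that flow on a toy 3-way tensor: plain CP-ALS stands in for the paper's massively parallel G-HALS, a random initialization stands in for the VBI estimate, and a single least-squares solve over the new slices stands in for the fusion phase. All function names and the exact update scheme here are our assumptions, not the paper's method.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n matricization: fibers of the given mode become rows.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    # Column-wise Kronecker product of two factor matrices.
    r = A.shape[1]
    return (A[:, None, :] * B[None, :, :]).reshape(-1, r)

def kr_and_gram(factors, skip, rank):
    # Khatri-Rao product of all factors except `skip`, plus the
    # Hadamard product of their Gram matrices (the standard ALS trick).
    others = [F for m, F in enumerate(factors) if m != skip]
    kr, G = others[0], np.ones((rank, rank))
    for F in others[1:]:
        kr = khatri_rao(kr, F)
    for F in others:
        G *= F.T @ F
    return kr, G

def cp_als(T, rank, n_iter=100, factors=None, rng=None):
    # Plain CP-ALS; in the paper a parallel HALS variant (G-HALS)
    # plays this role on each GPU-resident subtensor.
    if rng is None:
        rng = np.random.default_rng(0)
    if factors is None:
        # Random stand-in for the "giant-step" VBI initialization.
        factors = [rng.standard_normal((s, rank)) for s in T.shape]
    for _ in range(n_iter):
        for n in range(T.ndim):
            kr, G = kr_and_gram(factors, n, rank)
            factors[n] = unfold(T, n) @ kr @ np.linalg.pinv(G)
    return factors

def fuse_increment(factors, T_inc, mode=0):
    # Fusion sketch: reuse the other-mode factors of the original
    # tensor, solve only the rows belonging to the new slices, and
    # concatenate them onto the existing mode-`mode` factor.
    rank = factors[0].shape[1]
    kr, G = kr_and_gram(factors, mode, rank)
    rows_new = unfold(T_inc, mode) @ kr @ np.linalg.pinv(G)
    factors[mode] = np.vstack([factors[mode], rows_new])
    return factors

# Toy usage: factorize a rank-3 tensor, then absorb 10 new slices
# along mode 0 without re-factorizing the whole tensor.
rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((s, 3)) for s in (40, 30, 20))
T = np.einsum('ir,jr,kr->ijk', A, B, C)
factors = cp_als(T, rank=3)
A_inc = rng.standard_normal((10, 3))
T_inc = np.einsum('ir,jr,kr->ijk', A_inc, B, C)
factors = fuse_increment(factors, T_inc, mode=0)  # cost ~ increment size
```

The point of the toy example is only the scaling behavior claimed in the abstract: absorbing the increment costs a single solve over the new slices, rather than a full re-factorization of the augmented tensor.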
