Abstract
Tensor decompositions have found many applications in signal processing, data mining, machine learning, and other fields. In particular, the block term decomposition (BTD), a generalization of both the CP decomposition and the Tucker decomposition/HOSVD, has been successfully used for the compression and acceleration of neural networks. However, computing a BTD is NP-hard, and optimization-based methods usually suffer from slow convergence or even fail to converge, which limits the applicability of BTD. This paper considers a “blind” block term decomposition (BBTD) of high-order tensors, in which the block diagonal structure of the core tensor is unknown. Our contributions include: 1) We establish necessary and sufficient conditions for the existence of a BTD, characterize the conditions under which a BTD solves the BBTD problem, and show that the BBTD is unique under a “low-rank” assumption. 2) We propose an algebraic method to compute the BBTD. The method transforms the problem of determining the block diagonal structure of the core tensor into a clustering problem over complex numbers, in polynomial time; once the clustering problem is solved, the BBTD is obtained by computing several matrix decompositions. Numerical results show that our method computes the BBTD even in the presence of moderate noise, whereas optimization-based methods (e.g., MINF and NLS in TENSORLAB) may fail to converge.
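For context, one standard way to write a block term decomposition (the notation below is illustrative and is not taken from the paper itself) is

    \mathcal{A} = \mathcal{S} \times_1 U^{(1)} \times_2 U^{(2)} \cdots \times_d U^{(d)}, \qquad \mathcal{S} = \operatorname{blkdiag}(\mathcal{S}_1, \ldots, \mathcal{S}_R),

where \times_k denotes the mode-k tensor-matrix product, the U^{(k)} are factor matrices, and the core tensor \mathcal{S} is block diagonal with blocks \mathcal{S}_1, ..., \mathcal{S}_R. The CP decomposition is the special case in which every block has size 1 x 1 x ... x 1, and the Tucker decomposition/HOSVD is the case R = 1 (a single dense core). In the BBTD problem, the number and sizes of the blocks \mathcal{S}_r are unknown and must be recovered from the tensor itself.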