Abstract

In the last few years, great efforts have been made to extend the linear projection technique (LPT) to multidimensional data (i.e., tensors), generally referred to as the multilinear projection technique (MPT). The vectorized nature of LPT requires high-dimensional data to be converted into vectors, and hence may lose the spatial neighborhood information of the raw data. MPT addresses this problem well by encoding multidimensional data as general tensors of second or even higher order. In this paper, we propose a novel multilinear projection technique, called multilinear spatial discriminant analysis (MSDA), to identify the underlying manifold of high-order tensor data. MSDA considers both the nonlocal structure and the local structure of the data in the transform domain, seeking to learn projection matrices along all directions (modes) of the tensor data that simultaneously maximize the nonlocal structure and minimize the local structure. Different from multilinear principal component analysis (MPCA), which aims to preserve the global structure, and tensor locality preserving projection (TLPP), which favors preserving the local structure, MSDA seeks a tradeoff between the nonlocal (global) and local structures so as to derive its discriminant information from the range of the nonlocal structure and the range of the local structure. This spatial discriminant characteristic gives MSDA a more powerful manifold-preserving ability than TLPP and MPCA. Theoretical analysis shows that traditional MPTs, such as multilinear discriminant analysis, TLPP, MPCA, and the tensor maximum margin criterion, can be derived from the MSDA model by setting different graphs and constraints. Extensive experiments on face databases (ORL, CMU PIE, and the extended Yale-B) and the Weizmann action database demonstrate the effectiveness of the proposed MSDA method.
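
The sketch below illustrates, under several assumptions, how such an alternating mode-wise optimisation could look for second-order tensors. It is not the authors' implementation: the function names (msda_sketch, _update_mode), the graph construction (k nearest same-class neighbours for the local graph, its complement for the nonlocal graph), and the scatter-difference criterion are illustrative choices inferred from the abstract's description of maximizing the nonlocal structure while minimizing the local structure.

```python
import numpy as np

def msda_sketch(X, labels, dims_out=(10, 10), k=5, n_iters=5):
    """Alternating optimisation of mode-wise projections (hypothetical sketch).

    X        : (n_samples, d1, d2) array of 2nd-order tensors (e.g. face images)
    labels   : (n_samples,) class labels, used to build the local (neighbour) graph
    dims_out : target dimensionality (p1, p2) for the two tensor modes
    """
    n, d1, d2 = X.shape

    # Local graph: k nearest same-class neighbours; nonlocal graph: its complement.
    # (Assumed construction -- the paper may use different graphs or weights.)
    flat = X.reshape(n, -1)
    dists = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=2)
    W_local = np.zeros((n, n))
    for i in range(n):
        same = np.flatnonzero(labels == labels[i])
        same = same[same != i]
        nbrs = same[np.argsort(dists[i, same])[:k]]
        W_local[i, nbrs] = 1.0
    W_local = np.maximum(W_local, W_local.T)        # symmetrise
    W_nonlocal = 1.0 - W_local
    np.fill_diagonal(W_nonlocal, 0.0)

    # Initialise mode-wise projections, then alternate between the two modes.
    U1 = np.eye(d1)[:, :dims_out[0]]
    U2 = np.eye(d2)[:, :dims_out[1]]
    for _ in range(n_iters):
        U1 = _update_mode(X, W_local, W_nonlocal, other=U2, mode=1, p=dims_out[0])
        U2 = _update_mode(X, W_local, W_nonlocal, other=U1, mode=2, p=dims_out[1])
    return U1, U2


def _update_mode(X, W_local, W_nonlocal, other, mode, p):
    """One mode update: top-p eigenvectors of (nonlocal scatter - local scatter)."""
    n = X.shape[0]
    if mode == 1:
        Y = np.einsum('nij,jq->niq', X, other)      # project mode-2, keep mode-1 fibres
    else:
        Y = np.einsum('nij,iq->njq', X, other)      # project mode-1, keep mode-2 fibres
    d = Y.shape[1]
    S_local = np.zeros((d, d))
    S_nonlocal = np.zeros((d, d))
    for i in range(n):
        for j in range(n):
            if W_local[i, j] == 0.0 and W_nonlocal[i, j] == 0.0:
                continue
            D = Y[i] - Y[j]
            S = D @ D.T
            S_local += W_local[i, j] * S
            S_nonlocal += W_nonlocal[i, j] * S
    # Difference criterion: maximise nonlocal scatter while minimising local scatter.
    eigvals, eigvecs = np.linalg.eigh(S_nonlocal - S_local)
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:p]]
```

In such a scheme, each sample would then be reduced as U1.T @ X_i @ U2 and passed to a classifier (e.g. nearest neighbour), which is the usual evaluation protocol for multilinear projection methods on databases like those listed above.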
