Abstract

Facing rapidly increasing demands for analyzing high-order (multiway) data, feature-extraction methods have become imperative for analysis and processing. Traditional feature-extraction methods, however, either vectorize the data and thereby destroy the structure hidden in it, as in PCA and PCA-like methods, which is unfavorable to data recovery, or fail to eliminate redundant information effectively, as in Tucker decomposition (TD) and TD-like methods. To overcome these limitations, we propose a more flexible and powerful tool in this article, called multiview principal component analysis (Multiview-PCA). By segmenting a random tensor into equal-sized subarrays called sections and maximizing the variation captured by orthogonal projections of these sections, Multiview-PCA finds principal components in a parsimonious and flexible way. To this end, two new tensor operations, the S-direction inner and outer products, are introduced to formulate tensor projection and recovery. Because the segmentation scheme is characterized by a section depth and direction, Multiview-PCA can be implemented repeatedly in different ways, which defines the sequential and the global Multiview-PCA, respectively. These variants include PCA and PCA-like methods, and TD and TD-like methods, as special cases, corresponding to the deepest and the shallowest section depths, respectively. We also propose an adaptive depth- and direction-selection algorithm for implementing Multiview-PCA. Multiview-PCA is then tested in terms of subspace recovery ability, compression ability, and feature-extraction performance when applied to a set of artificial data, surveillance videos, and hyperspectral imaging data. All numerical results support the flexibility, effectiveness, and usefulness of Multiview-PCA.
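To make the sectioning idea concrete, the following is a minimal, hypothetical NumPy sketch of what "segmenting a tensor into equal-sized sections along a chosen direction and extracting directions of maximal section variation" could look like. It is not the authors' algorithm: it omits the S-direction inner/outer products and the adaptive depth/direction selection, and all names (multiview_pca_sketch, direction, section_depth) are invented for illustration only.

```python
# Hypothetical illustration only, not the paper's method: section a 3-way tensor
# along one direction, flatten each equal-sized section, and take the leading
# directions of variation among sections via an ordinary SVD.
import numpy as np

def multiview_pca_sketch(X, direction=0, section_depth=2, n_components=5):
    """Segment X along `direction` into sections of `section_depth` slices,
    flatten each section, and return the top principal directions."""
    X = np.moveaxis(X, direction, 0)              # bring the chosen direction to the front
    n = X.shape[0] - X.shape[0] % section_depth   # drop any remainder so sections are equal-sized
    sections = X[:n].reshape(n // section_depth, -1)   # one flattened vector per section
    sections = sections - sections.mean(axis=0)        # center before decomposition
    _, _, Vt = np.linalg.svd(sections, full_matrices=False)
    return Vt[:n_components]                      # directions of maximal section variation

# Example: a synthetic 30 x 40 x 50 tensor, sectioned along mode 1 with depth 4.
X = np.random.randn(30, 40, 50)
components = multiview_pca_sketch(X, direction=1, section_depth=4, n_components=3)
print(components.shape)
```

In this crude analogy, a deeper section depth groups more slices into each section (approaching full vectorization, as in PCA), while a shallower depth keeps sections close to individual slices (closer in spirit to mode-wise TD projections); the paper's actual formulation interpolates between these regimes through the S-direction products.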
