Abstract

Tensor data (i.e., data with multiple dimensions) are rapidly growing in scale in many practical applications, which poses new challenges for data modeling and analysis approaches, such as highly complex high-order relations, gross noise, and varying data scale. Existing low-rank data analysis methods, while effective on matrix data, may fail in the tensor regime due to these challenges, so a robust and scalable low-rank tensor modeling method is highly desirable. In this paper, we develop an online robust low-rank tensor modeling (ORLTM) method to address these challenges. ORLTM leverages the high-order correlations among all tensor modes to model the intrinsic low-rank structure of streaming tensor data online, and, by virtue of dictionary learning, it can effectively analyze data residing in a mixture of multiple subspaces. ORLTM consumes a limited memory footprint that remains constant regardless of the growth in tensor data size, which facilitates processing tensor data at a large scale. More concretely, it models each mode unfolding of streaming tensor data using the bilinear formulation of the tensor nuclear norm. With this reformulation, ORLTM employs a stochastic optimization algorithm that alternately updates the learned low-rank structure online. To obtain the final tensor estimate, ORLTM applies an average pooling operation to the folded tensors from all modes. We also analyze the computational complexity, memory cost, and convergence of the method. Moreover, we extend ORLTM to the image alignment scenario by incorporating geometric transformations and linearizing the constraints. Extensive empirical studies on synthetic data and three practical vision tasks, namely video background subtraction, image alignment, and visual tracking, demonstrate the superiority of the proposed method.
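To make the core modeling step concrete, below is a minimal batch sketch, not the paper's online algorithm: each mode-k unfolding X_(k) is approximated by a bilinear factorization D_k R_k^T (the classical identity ||L||_* = min over L = U V^T of (||U||_F^2 + ||V||_F^2) / 2 underlies the bilinear reformulation of the nuclear norm), and the per-mode estimates are folded back and average-pooled. All names here (unfold, fold, low_rank_estimate, rank, lam) are hypothetical illustrations; ORLTM itself performs these updates online via stochastic optimization with a learned dictionary.

import numpy as np

def unfold(T, mode):
    # Mode-k unfolding: move axis `mode` to the front and flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    # Inverse of `unfold`: reshape the matrix back into a tensor of `shape`.
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

def low_rank_estimate(X, rank=5, lam=0.1, n_iters=20, seed=0):
    # Hypothetical batch sketch: fit D_k @ R_k.T to each mode unfolding by
    # alternating ridge-regularized least squares, then fold and average.
    # ORLTM instead updates D_k and R_k online with stochastic optimization.
    rng = np.random.default_rng(seed)
    estimates = []
    for mode in range(X.ndim):
        Xk = unfold(X, mode)
        D = rng.standard_normal((Xk.shape[0], rank))
        R = rng.standard_normal((Xk.shape[1], rank))
        I = lam * np.eye(rank)
        for _ in range(n_iters):
            R = np.linalg.solve(D.T @ D + I, D.T @ Xk).T    # update coefficients
            D = np.linalg.solve(R.T @ R + I, R.T @ Xk.T).T  # update dictionary
        estimates.append(fold(D @ R.T, mode, X.shape))
    return np.mean(estimates, axis=0)  # average pooling over all modes

# Usage: L = low_rank_estimate(noisy_tensor, rank=5) recovers a low-rank
# estimate of a tensor corrupted by gross noise.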
