Abstract
This paper proposes a new algorithm, named Multi-Task Robust Principal Component Analysis (MTRPCA), to collaboratively integrate multiple visual features and motion priors for human motion segmentation. Given video data described by multiple features, the human motion part is obtained by jointly decomposing the multiple feature matrices into pairs of low-rank and sparse matrices. The inference process is formulated as a convex optimization problem that minimizes a constrained combination of the nuclear norm and the ℓ2,1-norm, which can be solved efficiently with the Augmented Lagrange Multiplier (ALM) method. Compared to previous methods, which usually make use of individual features, the proposed method seamlessly integrates multiple features and priors within a single inference step, and thus produces more accurate and reliable results. Experiments on the HumanEva human motion dataset show that the proposed MTRPCA is effective and promising.
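The single-feature building block underlying the abstract is robust PCA with a column-structured sparsity term: decompose a feature matrix X into a low-rank part L plus a column-sparse part S by minimizing ||L||_* + λ||S||_{2,1} subject to X = L + S. Below is a minimal sketch of an inexact-ALM solver for this single-task subproblem, assuming the standard proximal updates (singular value thresholding for the nuclear norm, column-wise shrinkage for the ℓ2,1-norm); the function names, default λ = 1/√max(m, n), and stopping rule are illustrative choices, not the paper's exact multi-task formulation.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def col_shrink(M, tau):
    """Column-wise shrinkage: proximal operator of tau * ||.||_{2,1}."""
    norms = np.linalg.norm(M, axis=0)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return M * scale

def rpca_l21(X, lam=None, mu=1.0, rho=1.5, tol=1e-7, max_iter=500):
    """Inexact ALM for min ||L||_* + lam * ||S||_{2,1}  s.t.  X = L + S.

    Y is the Lagrange multiplier; mu is the penalty weight, increased
    by rho each iteration as in standard inexact-ALM schemes.
    """
    m, n = X.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))  # common heuristic, an assumption here
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    Y = np.zeros_like(X)
    for _ in range(max_iter):
        L = svt(X - S + Y / mu, 1.0 / mu)          # low-rank update
        S = col_shrink(X - L + Y / mu, lam / mu)   # column-sparse update
        R = X - L - S                              # constraint residual
        Y += mu * R                                # dual ascent
        mu *= rho
        if np.linalg.norm(R) <= tol * max(np.linalg.norm(X), 1.0):
            break
    return L, S
```

In the multi-task setting described in the abstract, one such decomposition would be run jointly across all feature matrices with a coupling term tying the sparse (motion) components together; the sketch above covers only the shared per-feature machinery.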