Abstract

Human action recognition, which identifies human actions in video, is a fundamental task in computer vision. Although many single-view and multi-view methods have been proposed for human action recognition, these approaches can neither be extended to new action recognition or action classification tasks nor discover the underlying correlations among different views. To tackle this problem, this paper proposes a new lifelong multi-view subspace learning framework for continual human action recognition, which exploits the complementary information among different views from a lifelong learning perspective. More specifically, a set of view-specific libraries is established to gradually store the useful information within multiple views. When a new action recognition task arrives, we decompose the model parameters into a set of embedded parameters over the view-specific libraries. A latent representation subspace is constructed by encouraging it to be close to the different view-specific libraries, which leverages the high-order correlations among views and avoids relying on only partial information for the action recognition task. Meanwhile, we employ an alternating direction strategy to optimize the proposed model. Empirical studies on real-world multi-view action recognition datasets show that the proposed framework attains superior recognition performance and saves computational time when continually learning new action recognition tasks.
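To make the library-based decomposition concrete, the following is a minimal sketch, assuming a dictionary-learning-style formulation in which each view maintains a library matrix and the parameters of a newly arriving task are represented as embedded coefficients over that library. All names, shapes, and the ridge-regularized least-squares embedding are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

# Hypothetical sketch: for each view v, a library L_v (d x k) stores
# accumulated knowledge; the parameters w_t of a new task t are
# approximated as w_t ~= L_v @ s_t, where s_t are the task-specific
# embedded coefficients. Shapes and the solver are assumptions.

rng = np.random.default_rng(0)
d, k, n_views = 16, 4, 2

# One library per view, randomly initialised for illustration.
libraries = [rng.standard_normal((d, k)) for _ in range(n_views)]

def embed_task(w, library, lam=0.1):
    """Embed task parameters w onto a view library by ridge-regularised
    least squares: s = argmin_s ||w - L s||^2 + lam * ||s||^2."""
    L = library
    return np.linalg.solve(L.T @ L + lam * np.eye(L.shape[1]), L.T @ w)

# A new task arrives; embed its parameters into every view's library.
w_new = rng.standard_normal(d)
for v, L in enumerate(libraries):
    s = embed_task(w_new, L)   # task-specific embedded coefficients
    w_hat = L @ s              # reconstruction from the library
    err = np.linalg.norm(w_new - w_hat) / np.linalg.norm(w_new)
    print(f"view {v}: relative reconstruction error = {err:.3f}")
```

Because only the small coefficient vector is solved per task while the libraries evolve slowly, this style of decomposition is what lets a lifelong learner handle a stream of tasks without retraining from scratch.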
