Abstract

Human action recognition is an active topic in computer vision. Many approaches have been proposed to recognize different types of actions, but recognition performance deteriorates rapidly when the viewpoint changes. Traditional approaches address this problem by inductive transfer learning, which requires manually labeled target-view samples. In this paper, we present a novel approach to cross-view action recognition based on transductive transfer learning, which addresses the problem by transferring instances across views. In our setting, neither labels for target-view examples nor correspondences between examples from paired views are required. Experimental results on the IXMAS multi-view data set demonstrate the effectiveness of our approach, which is comparable to the state of the art.
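To make the transductive setting concrete, the following is a minimal illustrative sketch, not the paper's algorithm: it assumes features from both views are already embedded in a shared space and transfers labels from source-view instances to unlabeled target-view instances by nearest-neighbour matching, using no target-view labels and no pairwise view correspondences. All names (`transfer_labels`, the toy features and labels) are hypothetical.

```python
# Illustrative sketch only -- NOT the method proposed in the paper.
# Assumes both views share a common feature space; labels are transferred
# to the unlabeled target view by nearest-neighbour instance matching,
# a simple transductive baseline.

def transfer_labels(source, target):
    """source: list of (feature_vector, label) pairs from the labeled view.
    target: list of feature_vectors from the unlabeled view.
    Returns predicted labels for the target instances."""
    def sq_dist(a, b):
        # squared Euclidean distance between two feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))

    preds = []
    for t in target:
        # adopt the label of the closest source-view instance
        _, label = min(source, key=lambda s: sq_dist(s[0], t))
        preds.append(label)
    return preds

# toy example: two action classes in a 2-D feature space
source = [((0.0, 0.0), "wave"), ((1.0, 1.0), "kick")]
target = [(0.1, 0.2), (0.9, 0.8)]
print(transfer_labels(source, target))  # → ['wave', 'kick']
```

Because the target-view instances are labeled jointly at prediction time, no target-view annotation is ever consumed, which is the defining property of the transductive setting the abstract contrasts with inductive transfer.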
