Abstract

Since data in many practical applications occur, or can be captured, in the form of multiple views, multi-view action recognition has recently received much attention: the complementary and heterogeneous information across views can be exploited to promote the downstream task. However, most existing methods assume that the multi-view data is complete, an assumption that may not hold in real-world applications. To this end, this paper proposes a novel View Knowledge Transfer Network (VKTNet) to handle multi-view action recognition even when some views are incomplete. Specifically, view knowledge transfer is performed with a conditional generative adversarial network (cGAN), which reproduces each view's latent representation conditioned on the other view's information. As such, high-level semantic features are effectively extracted to bridge the semantic gap between the two views. In addition, to efficiently fuse the decision results produced by the individual views, a Siamese Scaling Network (SSN) is proposed instead of a simple classifier. Experimental results on three public datasets show that our model achieves superior performance against competing methods when all views are available, while avoiding performance degradation when some views are missing.
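To make the cross-view conditioning idea concrete, the following is a minimal sketch of cGAN-style generation of a missing view's latent representation, conditioned on an available view's features. All layer sizes, dimensions, and names here are illustrative assumptions, not details taken from the paper, and the discriminator and training loop are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class CrossViewGenerator:
    """Toy cGAN generator: produces a missing view's latent representation
    conditioned on another view's features (conditioning by concatenation).
    Dimensions and the two-layer MLP structure are illustrative assumptions."""

    def __init__(self, noise_dim, cond_dim, latent_dim, hidden=64):
        in_dim = noise_dim + cond_dim  # noise and condition are concatenated
        self.W1 = rng.standard_normal((in_dim, hidden)) * 0.1
        self.b1 = np.zeros(hidden)
        self.W2 = rng.standard_normal((hidden, latent_dim)) * 0.1
        self.b2 = np.zeros(latent_dim)

    def __call__(self, z, cond):
        # Concatenate noise with the available view's features, then map
        # through a small MLP to the target view's latent space.
        x = np.concatenate([z, cond], axis=-1)
        return relu(x @ self.W1 + self.b1) @ self.W2 + self.b2

# Features of the available view stand in for the cGAN condition.
cond = rng.standard_normal(128)   # available view's feature vector
z = rng.standard_normal(32)       # noise input
gen = CrossViewGenerator(noise_dim=32, cond_dim=128, latent_dim=256)
fake_latent = gen(z, cond)
print(fake_latent.shape)          # (256,)
```

In a full cGAN setup, a discriminator would receive (latent, condition) pairs and the generator would be trained adversarially so that the generated latent representation is indistinguishable from the real view's representation given the same condition.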
