Abstract

Most earlier action prediction methods, such as PoseGAN and C2GAN, are limited to unidirectional prediction and cannot be guided to produce a desired class of action. In this paper, we propose an action-guided Cycle Generative Adversarial Network (AGCGAN), a bidirectional video prediction model that anticipates future frames from current visual frames and vice versa. The proposed CycleGAN architecture consists of two generators and two discriminators, along with two additional keypoint detectors used to compute a keypoint loss. The adversarial and cycle losses perform the appearance modelling, whereas the keypoint loss provides the necessary motion correction. We evaluate the effectiveness of the proposed method on the UT-Interaction dataset, and also test the model on the standard Market-1501 dataset against state-of-the-art schemes. Results show that the proposed method achieves visual quality comparable to state-of-the-art methods.
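The abstract describes an objective combining adversarial and cycle losses (appearance) with a keypoint loss (motion) in both prediction directions. The sketch below shows one plausible way such a combined objective could be assembled; the function name, loss-term arguments, and weighting coefficients (`lambda_cyc`, `lambda_kp`) are illustrative assumptions, not values taken from the paper.

```python
def total_loss(adv_fwd, adv_bwd, cyc_fwd, cyc_bwd, kp_fwd, kp_bwd,
               lambda_cyc=10.0, lambda_kp=1.0):
    """Hypothetical combined objective for a bidirectional CycleGAN
    predictor with an added keypoint term (weights are assumptions)."""
    adversarial = adv_fwd + adv_bwd            # appearance realism, both directions
    cycle = lambda_cyc * (cyc_fwd + cyc_bwd)   # reconstruction consistency
    keypoint = lambda_kp * (kp_fwd + kp_bwd)   # motion correction from keypoints
    return adversarial + cycle + keypoint
```

In practice each argument would be a scalar loss computed on a training batch (e.g. GAN losses from the two discriminators, L1 cycle-reconstruction losses, and distances between detected keypoints of real and generated frames).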
