Abstract

Moving object segmentation (MOS) in videos with bad weather, irregular object motion, camera jitter, shadows, and dynamic backgrounds remains an open problem for computer vision applications. To address these issues, in this paper we propose an approach named Foreground Generative Adversarial Network (FgGAN), which combines the recent concepts of generative adversarial networks (GANs) and unpaired training for background estimation and foreground segmentation. To the best of our knowledge, this is the first paper to apply GAN-based unpaired learning to MOS. Initially, a video-wise background is estimated using a GAN-based unpaired learning network (network-I). Then, to extract motion information related to the foreground, motion saliency is estimated from the estimated background and the current video frame. The estimated motion saliency is then given as input to a second GAN-based unpaired learning network (network-II) for foreground segmentation. To examine the effectiveness of the proposed FgGAN (cascaded networks I and II), challenging video categories such as dynamic background, bad weather, intermittent object motion, and shadow are collected from the ChangeDetection.net-2014 [26] database. Segmentation accuracy is evaluated qualitatively and quantitatively in terms of F-measure and percentage of wrong classification (PWC), and compared with existing state-of-the-art methods. The experimental results show that the proposed FgGAN achieves significant improvement in F-measure and PWC over existing state-of-the-art methods for MOS.
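The motion-saliency step described above sits between the two networks: network-I supplies an estimated background, and the saliency map derived from it cues network-II's segmentation. The paper does not specify how saliency is computed, so the sketch below assumes a simple normalized absolute difference between the estimated background and the current frame, with a threshold standing in for network-II's learned segmentation; the function name and threshold are illustrative, not the authors' method.

```python
import numpy as np

def motion_saliency(background, frame):
    """Per-pixel motion saliency as the normalized absolute difference
    between the estimated background and the current frame (an assumed
    stand-in for the paper's unspecified saliency computation)."""
    diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
    # Collapse colour channels (if present) into a single saliency map.
    if diff.ndim == 3:
        diff = diff.mean(axis=2)
    return diff / 255.0

# Toy example: a static background with one bright "moving" 2x2 patch.
bg = np.zeros((4, 4), dtype=np.uint8)
frame = bg.copy()
frame[1:3, 1:3] = 255
sal = motion_saliency(bg, frame)
mask = sal > 0.5  # thresholding here stands in for network-II
print(int(mask.sum()))  # → 4 foreground pixels
```

In the actual pipeline, `sal` would be fed to the GAN-based network-II rather than thresholded, so the segmentation is learned instead of hand-tuned.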
