Abstract
Moving Object Segmentation (MOS) is a crucial step in many computer vision applications, such as visual object tracking, autonomous vehicles, human activity analysis, surveillance, and security. Existing MOS approaches suffer performance degradation under the extremely challenging conditions of real-world environments, such as varying illumination, camouflaged objects, dynamic backgrounds, shadows, bad weather, and camera jitter. To address these problems, we propose a novel generative adversarial network (GAN)-based framework for moving object segmentation. Our framework jointly trains a generator, a classifier discriminator, and a representation-learning network to perform MOS in these challenging scenarios. During training, the discriminator acts as a decision maker between real and fake training samples using a conditional least-squares loss, while the representation-learning network measures the difference between the deep features of real and fake training samples through a content-loss formulation. A third loss term used to train the generator is a reconstruction loss that minimizes the difference between the spatial information of real and fake training samples. Moreover, we propose a novel modified U-Net architecture for the generator that outperforms the vanilla U-Net model. Experimental evaluations of the proposed method on four benchmark datasets, in comparison with thirty-two existing methods, demonstrate the strength of our model.
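The abstract names three generator loss terms: a conditional least-squares adversarial loss, a feature-space content loss, and a pixel-level reconstruction loss. The following is a minimal PyTorch-style sketch, not taken from the paper, of how such terms might combine into one generator objective; the networks `generator`, `discriminator`, and `feature_net` and the weights `lambda_content` and `lambda_rec` are hypothetical placeholders.

import torch
import torch.nn.functional as F

def generator_loss(generator, discriminator, feature_net,
                   frame, real_mask,
                   lambda_content=1.0, lambda_rec=10.0):
    """Illustrative combined loss under assumed interfaces:
    generator(frame) -> fake segmentation mask,
    discriminator(frame, mask) -> realness score (conditioned on the frame),
    feature_net(mask) -> deep features of a mask."""
    fake_mask = generator(frame)

    # Conditional least-squares (LSGAN-style) adversarial term: the
    # generator pushes the discriminator's score for its fake masks
    # toward the "real" target of 1.
    d_fake = discriminator(frame, fake_mask)
    adv_loss = F.mse_loss(d_fake, torch.ones_like(d_fake))

    # Content term: distance between deep features of real and fake samples.
    content_loss = F.mse_loss(feature_net(fake_mask), feature_net(real_mask))

    # Reconstruction term: pixel-wise (spatial) difference between
    # real and fake samples.
    rec_loss = F.l1_loss(fake_mask, real_mask)

    return adv_loss + lambda_content * content_loss + lambda_rec * rec_loss

The relative weights and the choice of L1 versus L2 for each term are assumptions for illustration; the paper's exact formulation may differ.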