Abstract

Under nighttime conditions, poor and uneven illumination severely limits the performance of traditional background modeling methods for video surveillance. To make background modeling in nighttime scenes perform as well as in daytime, we propose a generation-based background modeling framework for foreground surveillance. Using a pre-specified daytime reference image as the background frame, a GAN-based generation model, called N2DGAN, is trained to transfer each frame of a nighttime video to a virtual daytime image that shares the same scene as the reference image except for the foreground. Specifically, to balance preserving the background scene and the foreground object(s) when generating the virtual daytime image, we present a two-pathway generation model in which global and local sub-networks are combined under spatial and temporal consistency constraints. For the sequence of generated virtual daytime images, a multi-scale Bayes model is further proposed to characterize the temporal variation of the background. We manually labeled ground truth on the collected nighttime video datasets for performance evaluation. Results reported in both the main paper and the supplementary material demonstrate the effectiveness of the proposed approach.
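To illustrate the kind of per-pixel Bayes decision that underlies background modeling on the generated daytime sequence, the following is a minimal sketch, not the paper's actual multi-scale model: it assumes a single-scale Gaussian background model per pixel and a uniform foreground likelihood, and all names and parameter values are hypothetical.

```python
import math

def gaussian_pdf(x, mean, var):
    # Likelihood of intensity x under a Gaussian background model
    # (mean/var would be estimated from the virtual daytime sequence).
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def classify_pixel(x, bg_mean, bg_var, fg_uniform=1 / 256, prior_bg=0.9):
    # Bayes rule: label the pixel background if its posterior under the
    # background model exceeds that of a uniform foreground model.
    # fg_uniform and prior_bg are illustrative assumptions, not values
    # from the paper.
    p_bg = gaussian_pdf(x, bg_mean, bg_var) * prior_bg
    p_fg = fg_uniform * (1 - prior_bg)
    return "background" if p_bg > p_fg else "foreground"

# A pixel close to the modeled background intensity stays background,
# while a large deviation is flagged as foreground.
print(classify_pixel(120, bg_mean=118, bg_var=25))  # background
print(classify_pixel(200, bg_mean=118, bg_var=25))  # foreground
```

The paper's multi-scale Bayes model would apply this style of decision across several spatial scales and fuse the results; this sketch shows only the single-pixel, single-scale case.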
