Abstract

Foreground detection is a classic video processing task, widely used in video surveillance and other fields, and a basic step in many computer vision pipelines. Real-world scenes are complex and changeable, so traditional unsupervised methods struggle to extract foreground targets accurately. Building on deep learning, this paper proposes a foreground detection method based on a multiscale U-Net architecture with a fused attention mechanism. The attention mechanism is introduced into the multiscale U-Net through its skip connections, making the network attend more strongly to foreground objects, suppressing irrelevant background regions, and improving the model's learning ability. We conducted experiments and evaluations on the CDnet-2014 dataset. With a single RGB image as input, the proposed model uses only spatial information and achieves an overall F-measure of 0.9785. When multiple input frames are fused to exploit spatiotemporal information, the overall F-measure reaches 0.9830. In the Low Framerate category in particular, the F-measure exceeds that of current state-of-the-art methods. The experimental results demonstrate the effectiveness and superiority of the proposed method.
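The abstract states that attention is injected into the U-Net skip connections so that encoder features are reweighted before being concatenated with decoder features. The paper's exact formulation is not given here, so the sketch below illustrates one common realization of this idea, an additive attention gate in the style of Attention U-Net, with 1x1 convolutions modeled as channel-mixing matrices in plain NumPy; all shapes and weight names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_gate(skip, gate, W_x, W_g, psi):
    """Reweight encoder skip features with an additive attention map.

    skip : encoder features, shape (C, H, W)  -- hypothetical layout
    gate : decoder gating signal at the same resolution, shape (C, H, W)
    W_x, W_g : (F, C) channel-mixing matrices standing in for 1x1 convs
    psi : (F,) vector collapsing the intermediate features to one channel
    """
    theta = np.einsum('fc,chw->fhw', W_x, skip)   # project skip features
    phi = np.einsum('fc,chw->fhw', W_g, gate)     # project gating signal
    f = np.maximum(theta + phi, 0.0)              # additive fusion + ReLU
    alpha = sigmoid(np.einsum('f,fhw->hw', psi, f))  # attention map in (0, 1)
    return skip * alpha                           # suppress background regions

# Toy usage with assumed sizes: 4 channels, 8x8 feature maps.
rng = np.random.default_rng(0)
skip = rng.standard_normal((4, 8, 8))
gate = rng.standard_normal((4, 8, 8))
W_x = rng.standard_normal((6, 4))
W_g = rng.standard_normal((6, 4))
psi = rng.standard_normal(6)
out = attention_gate(skip, gate, W_x, W_g, psi)
```

Because the attention map lies in (0, 1), the gate can only attenuate skip features, which matches the stated goal of suppressing irrelevant background while passing foreground activations through.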
