Abstract

This paper investigates how to fuse grayscale and thermal video data for detecting foreground objects in challenging scenarios. To this end, we propose an intuitive yet effective method called weighted low-rank decomposition (WELD), which adaptively pursues the cross-modality low-rank representation. Specifically, we form two data matrices by accumulating sequential frames from the grayscale and the thermal videos, respectively. Within these two observation matrices, WELD detects moving foreground pixels as sparse outliers against the low-rank background structure and incorporates weight variables to make the models of the two modalities complementary to each other. Smoothness constraints on object motion are also introduced in WELD to further improve robustness to noise. For optimization, we propose an iterative algorithm that efficiently solves the low-rank models via three subproblems. Moreover, we utilize an edge-preserving filtering-based method to substantially speed up WELD while preserving its accuracy. To provide a comprehensive evaluation benchmark for grayscale-thermal foreground detection, we create a new data set of 25 aligned grayscale-thermal video pairs with high diversity. Our extensive experiments on both the newly created data set and the public data set OSU3 suggest that WELD achieves superior performance and comparable efficiency relative to other state-of-the-art approaches.
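To make the decomposition idea concrete, the sketch below shows a simplified, RPCA-style weighted low-rank plus sparse decomposition applied to two modality matrices (pixels x frames). It is only an illustrative approximation of the formulation described in the abstract: the alternating singular-value-thresholding and soft-thresholding updates, the parameter names (lam, tau), and the cross-modality weight rule are assumptions for exposition, not the paper's exact WELD model, and the smoothness constraints and edge-preserving speed-up are omitted.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def soft(X, tau):
    """Elementwise soft-thresholding: proximal operator of the weighted l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def weighted_lowrank_sparse(D_gray, D_thermal, lam=0.05, tau=1.0, iters=30):
    """Decompose each modality's frame matrix D (pixels x frames) into a
    low-rank background B and a sparse foreground F, with per-entry weights W
    coupling the two modalities (illustrative updates, not the exact WELD model)."""
    mods = {"gray": D_gray, "thermal": D_thermal}
    B = {k: np.zeros_like(v) for k, v in mods.items()}
    F = {k: np.zeros_like(v) for k, v in mods.items()}
    W = {k: np.ones_like(v) for k, v in mods.items()}
    for _ in range(iters):
        for k, D in mods.items():
            # Background: low-rank estimate via singular value thresholding.
            B[k] = svt(D - F[k], tau)
            # Foreground: weighted sparse outliers; smaller weights allow more foreground.
            F[k] = soft(D - B[k], lam * W[k])
        # Complementary weights (assumed rule): where one modality already flags
        # foreground, lower the sparsity penalty for the same pixels in the other.
        W["gray"] = 1.0 / (1.0 + np.abs(F["thermal"]))
        W["thermal"] = 1.0 / (1.0 + np.abs(F["gray"]))
    return B, F

# Usage on synthetic data: 100 pixels x 20 frames per modality.
rng = np.random.default_rng(0)
B, F = weighted_lowrank_sparse(rng.standard_normal((100, 20)),
                               rng.standard_normal((100, 20)))
```

In this sketch the foreground mask would be obtained by thresholding the magnitude of F for each modality; the reciprocal weight rule is just one simple way to express the "complementary" coupling the abstract describes.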
