Abstract

Video surveillance systems have become increasingly important in recent years. Information extracted from a single-spectrum video is often insufficient under adverse conditions such as low illumination, shadowing, smoke, dust, unstable backgrounds, and camouflage. Real-time video processing must also be causal: future frames are unavailable at the time the current frame is processed. In this paper, we propose a superpixel-based causal multisensor video fusion algorithm suitable for real-time surveillance tasks. We develop new superpixel-level spatial and temporal saliency models, and design novel superpixel-level fusion rules to obtain the fused output. Comprehensive comparisons with several existing methods clearly demonstrate the benefits of our approach.
