Abstract

Multiple visual sensor fusion provides an effective way to improve the robustness and accuracy of video surveillance systems. Traditional video fusion methods fuse the source videos frame by frame with static image fusion methods, without considering information in the temporal dimension, so temporal information cannot be fully utilized in the fusion procedure. To address this problem, a visible and infrared video fusion method based on the uniform discrete curvelet transform (UDCT) and spatial-temporal information is proposed. The source videos are decomposed using the UDCT, and a set of fusion rules based on local spatial-temporal energy is designed for the decomposition coefficients. These rules consider both the coefficients of the current frame and the coefficients along the temporal dimension, i.e., those of adjacent frames. Experimental results demonstrate that the proposed method works well and outperforms the comparison methods in terms of temporal stability and consistency as well as spatial-temporal information extraction.
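The core idea of a local spatial-temporal energy fusion rule can be sketched as follows. This is a minimal illustration, not the paper's implementation: the coefficient layout (a `(frames, H, W)` array per subband), the window radius, and the function names `local_st_energy` and `fuse_frame` are all assumptions made for the sketch, and a simple choose-max rule stands in for whatever rule set the paper actually specifies.

```python
import numpy as np

def local_st_energy(coeffs, t, radius=1):
    """Local spatial-temporal energy around frame t.

    coeffs: hypothetical (T, H, W) array of subband coefficients for one
    source video. Energy at each pixel is the sum of squared coefficients
    over a (2*radius+1)^2 spatial window spanning the current frame and
    its temporal neighbours.
    """
    T, H, W = coeffs.shape
    t0, t1 = max(0, t - radius), min(T, t + radius + 1)
    sq = (coeffs[t0:t1] ** 2).sum(axis=0)      # collapse the temporal axis
    padded = np.pad(sq, radius, mode="edge")   # replicate borders
    energy = np.zeros_like(sq)
    # spatial box sum over the window by shifting the padded array
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            energy += padded[dy:dy + H, dx:dx + W]
    return energy

def fuse_frame(vis, ir, t):
    """Choose-max rule: at each location keep the coefficient whose
    source has the larger local spatial-temporal energy."""
    e_vis, e_ir = local_st_energy(vis, t), local_st_energy(ir, t)
    return np.where(e_vis >= e_ir, vis[t], ir[t])
```

Because the energy window extends to adjacent frames, a coefficient that is strong in neighbouring frames keeps influencing the decision at the current frame, which is what gives such rules their temporal stability compared with purely frame-by-frame fusion.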
