Abstract

Video data fusion should take both the temporal and spatial dimensions into account simultaneously; a novel spatiotemporal video-fusion algorithm based on motion compensation in the wavelet-transform domain is therefore proposed in this study. The method combines motion compensation with the wavelet transform, making full use of the spatial geometric information and the inter-frame temporal information of the input videos, and it improves the temporal stability and consistency of the fused video compared with existing individual-frame-based fusion methods. The algorithm first applies an optical-flow motion-compensation approach to the input frames, then decomposes the compensated frames with the wavelet transform, and finally merges the input videos using a spatiotemporal energy-based fusion rule. Experimental results demonstrate that the proposed algorithm outperforms both traditional individual-frame-based methods and state-of-the-art three-dimensional-transform-based methods, visually and quantitatively.
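
To make the pipeline concrete, below is a minimal sketch of the three stages the abstract names: optical-flow motion compensation, wavelet decomposition, and an energy-based coefficient-selection rule. It is an illustration under stated assumptions, not the paper's implementation: it fuses one pair of grayscale frames at a time, uses Farneback optical flow and a single-level `db2` DWT, and a per-frame local spatial energy stands in for the paper's spatiotemporal energy rule. All function names here are hypothetical.

```python
import cv2
import numpy as np
import pywt


def compensate(prev_frame, curr_frame):
    """Warp curr_frame toward prev_frame using dense Farneback optical
    flow; this stands in for the paper's motion-compensation step."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_frame, curr_frame, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    h, w = curr_frame.shape
    grid = np.indices((h, w)).astype(np.float32)
    map_x = grid[1] + flow[..., 0]  # x-coordinates displaced by flow
    map_y = grid[0] + flow[..., 1]  # y-coordinates displaced by flow
    return cv2.remap(curr_frame, map_x, map_y, cv2.INTER_LINEAR)


def local_energy(coeff, ksize=3):
    """Local energy: sum of squared coefficients in a small window
    (a spatial simplification of the spatiotemporal energy rule)."""
    return cv2.boxFilter(coeff * coeff, -1, (ksize, ksize))


def fuse_frames(frame_a, frame_b, wavelet="db2"):
    """Single-level 2-D DWT fusion: average the approximation bands,
    and for each detail band keep, per location, the coefficient
    with the larger local energy."""
    ca_a, details_a = pywt.dwt2(frame_a.astype(np.float32), wavelet)
    ca_b, details_b = pywt.dwt2(frame_b.astype(np.float32), wavelet)
    ca_fused = 0.5 * (ca_a + ca_b)
    fused_details = []
    for da, db in zip(details_a, details_b):
        mask = local_energy(da) >= local_energy(db)
        fused_details.append(np.where(mask, da, db))
    return pywt.idwt2((ca_fused, tuple(fused_details)), wavelet)
```

In a full video pipeline, `compensate` would align each frame to its predecessor before fusion so that the energy comparison is computed over motion-aligned coefficients, which is what gives the method its temporal stability relative to fusing each frame independently.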
