Abstract

Video data fusion should account for both the temporal and the spatial dimension simultaneously; a novel spatiotemporal video-fusion algorithm based on motion compensation in the wavelet-transform domain is therefore proposed in this study. The fusion method combines motion compensation with the wavelet transform, making full use of the spatial geometric information and the inter-frame temporal information of the input videos. Compared with existing individual-frame-based fusion methods, the proposed method improves the temporal stability and consistency of the fused video. The algorithm first applies optical-flow motion compensation to the input frames, decomposes the compensated frames with the wavelet transform, and then merges the input videos with a spatiotemporal energy-based fusion rule. Experimental results demonstrate that the proposed fusion algorithm outperforms both traditional individual-frame-based methods and state-of-the-art three-dimensional-transform-based methods, visually and quantitatively.
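
To make the pipeline concrete, the sketch below illustrates one possible per-frame realization of the three stages named above: dense optical-flow motion compensation, wavelet decomposition, and an energy-based coefficient merge. It is a minimal illustration only, assuming grayscale 8-bit frames, Farneback optical flow, a `db2` wavelet, and a simple maximum-absolute-coefficient selection rule as stand-ins; the paper's actual spatiotemporal energy rule and flow estimator are not specified in this abstract.

```python
# Minimal sketch of motion-compensated wavelet-domain fusion.
# Assumptions (not from the paper): Farneback flow, db2 wavelet, 2 levels,
# average rule for the approximation band, max-|coefficient| for detail bands.
import cv2
import numpy as np
import pywt


def compensate(ref_frame, frame):
    """Warp `frame` toward `ref_frame` using dense optical flow (Farneback).

    Both inputs are single-channel uint8 frames of the same size.
    """
    flow = cv2.calcOpticalFlowFarneback(ref_frame, frame, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = frame.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)


def fuse_frames(frame_a, frame_b, wavelet="db2", level=2):
    """Fuse two registered frames in the wavelet domain with an energy rule."""
    coeffs_a = pywt.wavedec2(frame_a.astype(np.float64), wavelet, level=level)
    coeffs_b = pywt.wavedec2(frame_b.astype(np.float64), wavelet, level=level)
    # Approximation band: average the two inputs.
    fused = [(coeffs_a[0] + coeffs_b[0]) / 2.0]
    # Detail bands: keep the coefficient with the larger magnitude (energy).
    for bands_a, bands_b in zip(coeffs_a[1:], coeffs_b[1:]):
        fused.append(tuple(
            np.where(np.abs(ca) >= np.abs(cb), ca, cb)
            for ca, cb in zip(bands_a, bands_b)))
    rec = pywt.waverec2(fused, wavelet)
    # waverec2 may pad by a pixel; crop back to the input size.
    return rec[:frame_a.shape[0], :frame_a.shape[1]]


# Illustrative usage: compensate video B's frame toward video A's current
# frame before fusing, so both inputs share one spatial coordinate frame.
# fused_t = fuse_frames(a_frames[t], compensate(a_frames[t], b_frames[t]))
```

The max-magnitude detail rule is the classic choose-max salience criterion from wavelet image fusion; the abstract's spatiotemporal energy rule presumably extends such a criterion with inter-frame terms, which the compensation step makes meaningful by aligning content across time.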
