Abstract

In this paper, we propose an effective 3D patch-based matching and fusion method that handles dynamic or multi-view scenes in a multi-exposure image sequence using two-scale decomposition. Unlike most multi-exposure image fusion methods, the proposed method does not require a pre-alignment step to suppress ghosting artifacts. Because pixel values vary with both viewpoint and exposure, we use a uniform matching approach to find similar patches across the differently exposed images and then fuse them at each scale. By gathering all similar patches in the search window (rather than only the single best match) to form a 3D patch, we make full use of the complementary information in the multi-exposure images even when image information is limited, preserving both moving objects and scene details. Experimental results show that the proposed method not only performs well on dynamic scenes but also consistently produces high-quality fused images in multi-view scenes.
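To make the two ingredients named above concrete, the following is a minimal illustrative sketch, not the authors' implementation: a two-scale (base/detail) decomposition via Gaussian smoothing, and a patch search that stacks every sufficiently similar patch from the other exposures into one 3D patch group. The patch size, search radius, similarity threshold, and the zero-mean normalization used as a stand-in for the paper's uniform matching approach are all hypothetical choices; inputs are assumed to be float grayscale images in [0, 1].

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def two_scale_decompose(img, sigma=2.0):
    """Split a grayscale image into a smooth base layer and a detail layer."""
    base = gaussian_filter(img, sigma)
    detail = img - base
    return base, detail


def _normalize(patch):
    """Zero-mean, unit-norm patch: a crude stand-in for exposure-robust matching."""
    p = patch - patch.mean()
    n = np.linalg.norm(p)
    return p / n if n > 1e-8 else p


def gather_3d_patch(ref_img, other_imgs, y, x, patch=8, radius=6, thresh=0.1):
    """Stack the reference patch with every similar patch found in a local
    search window of each other exposure, forming one 3D patch group."""
    ref = ref_img[y:y + patch, x:x + patch]
    ref_n = _normalize(ref)
    group = [ref]
    for img in other_imgs:
        h, w = img.shape
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                yy, xx = y + dy, x + dx
                if 0 <= yy <= h - patch and 0 <= xx <= w - patch:
                    cand = img[yy:yy + patch, xx:xx + patch]
                    # Keep every candidate below the distance threshold,
                    # not just the single best match.
                    if np.sum((_normalize(cand) - ref_n) ** 2) < thresh:
                        group.append(cand)
    return np.stack(group, axis=0)  # (num_similar_patches, patch, patch)
```

Keeping all similar patches rather than only the best one is, per the abstract, what lets the method exploit complementary information when any single exposure is poorly exposed; a complete pipeline would apply this search per scale (base and detail), fuse each 3D group, and then aggregate the fused patches back into the output image.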
