Abstract
A novel multi-exposure image fusion (MEF) method is proposed to address color distortion and detail loss through adaptive image-patch segmentation. First, a super-pixel segmentation approach divides the input images into non-overlapping image patches composed of pixels with similar visual properties. The image patches are then decomposed into three independent components: signal strength, image structure, and intensity. The three components are fused separately based on the characteristics of the human visual system and the exposure level of the input images. Guided filtering is then used to remove the blocking artifacts caused by patch-wise processing. In contrast to existing methods that use fixed-size patches, the proposed method avoids blocking effects and preserves the color attributes of the input images. Experimental results show that the proposed method outperforms state-of-the-art multi-exposure fusion methods in both subjective and objective evaluation.
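To make the three-component decomposition concrete, below is a minimal sketch of the patch model commonly used in patch-based MEF (following the structure patch decomposition of Ma et al.): intensity is the patch mean, signal strength is the norm of the zero-mean residual, and structure is the unit-norm residual direction. The function names and the epsilon guard are illustrative, not the authors' exact implementation.

```python
import numpy as np

def decompose_patch(x, eps=1e-9):
    """Decompose a flattened image patch into signal strength,
    structure, and mean intensity (a common patch-based MEF model)."""
    l = x.mean()                 # intensity: mean value of the patch
    d = x - l                    # zero-mean residual
    c = np.linalg.norm(d)        # signal strength: magnitude of the residual
    s = d / (c + eps)            # structure: unit-norm residual direction
    return c, s, l

def recompose_patch(c, s, l):
    """Invert the decomposition: x = c * s + l."""
    return c * s + l

# Round trip on a random flattened 8x8 patch.
x = np.random.rand(64)
c, s, l = decompose_patch(x)
assert np.allclose(recompose_patch(c, s, l), x)
```

Decomposing each patch this way lets the fusion rule treat contrast (signal strength), fine detail (structure), and brightness (intensity) independently, which is what allows the method to weight each component by different visual criteria.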
Highlights
The dynamic range of a natural scene is much larger than that of images captured by ordinary consumer cameras [1]
The image patches are decomposed into three independent components: signal strength, image structure and intensity
The proposed method uses a super-pixel segmentation approach to divide the input images into image patches composed of pixels with similar visual properties
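The paper specifies only "super-pixel segmentation"; SLIC is one standard algorithm for producing such patches. A minimal sketch, assuming scikit-image's slic and illustrative parameter values (n_segments, compactness, and the file name are assumptions, not taken from the paper):

```python
import numpy as np
from skimage import io, img_as_float
from skimage.segmentation import slic

# Load one exposure of the input sequence (path is illustrative).
img = img_as_float(io.imread('exposure_0.png'))

# SLIC groups pixels with similar color and position into super-pixels;
# n_segments and compactness here are illustrative, not the paper's values.
labels = slic(img, n_segments=400, compactness=10, start_label=0)

# Iterate over the resulting irregular, non-overlapping patches.
for sp in np.unique(labels):
    mask = labels == sp          # boolean mask of one super-pixel
    patch_pixels = img[mask]     # (N, 3) array of this patch's pixels
    # ... decompose and fuse this patch as described in the abstract ...
```

Because super-pixels follow object boundaries instead of an arbitrary grid, pixels inside one patch share similar visual properties, which is what the adaptive segmentation relies on to avoid the blocking effect of fixed-size patches.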
Summary
The dynamic range of a natural scene is much larger than that of images captured by ordinary consumer cameras [1]. Patch-based MEF methods have therefore attracted increasing attention [8]. These methods divide multi-exposure images into fixed-size rectangular patches and perform the fusion process patch-wise, yet few researchers have studied the impact of patch division on the quality of the fused image. To address this issue, we propose a novel patch-based multi-exposure image fusion method. To the best of our knowledge, the paper by Li et al. [8] is the most similar work to ours; their method selects the optimal patch size based on the texture entropy of the input images. In our method, the weight map is calculated patch-wise based on characteristics of the human visual system (HVS) and the exposure level of the input images.
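The summary states that guided filtering removes the blocking artifacts introduced by patch-wise weighting. One common realization, an assumption here rather than the paper's exact pipeline, is to smooth each exposure's weight map with the exposure itself as the guide and then renormalize. The sketch below assumes OpenCV's ximgproc guided filter from opencv-contrib-python; refine_weight_maps, radius, and eps are illustrative names and values.

```python
import cv2
import numpy as np

def refine_weight_maps(images, weights, radius=8, eps=1e-3):
    """Edge-aware smoothing of patch-wise weight maps with a guided
    filter, then renormalization so the weights sum to 1 per pixel.
    Requires opencv-contrib-python (cv2.ximgproc)."""
    refined = []
    for img, w in zip(images, weights):
        guide = img.astype(np.float32)   # exposure acts as the guide
        w = w.astype(np.float32)         # patch-wise weight map
        # Edges in the guide steer the smoothing, suppressing patch seams.
        rw = cv2.ximgproc.guidedFilter(guide, w, radius, eps)
        refined.append(np.clip(rw, 0, None))
    total = np.sum(refined, axis=0) + 1e-12
    return [w / total for w in refined]
```

Because the guide image's edges steer the smoothing, seams between patches in flat regions are suppressed while weight transitions at real object boundaries are preserved, which matches the artifact-removal role the paper assigns to guided filtering.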