Abstract

In this paper, a novel de-ghosting image fusion technique is presented that enhances the quality of low dynamic range images using multiple exposure levels captured with an ordinary camera while removing ghosting artifacts. In the proposed algorithm, the source images, taken under different exposure settings, are first decomposed into base and detail layers using two-scale decomposition. The base and detail layers contain the large-scale and small-scale variations of the source images, respectively. A Laplacian-of-Gaussian filter is applied to the source images to extract edge information, and the saliency map of the edges is then computed. To remove ghosting artifacts, a weight matrix is calculated by applying a median filter to the histogram-equalized source images. This weight matrix is combined with the saliency map to generate more accurate weights. Separate weights for the base and detail layers are then computed using guided image filters. Finally, the base- and detail-layer weights are fused with the source images to generate a vivid, enhanced image free of artifacts. The proposed technique is evaluated both qualitatively and quantitatively. A comparison with other state-of-the-art techniques in terms of Yang's metric ($Q_{Y}$), Quality Mutual Information ($Q_{MI}$), the Gradient-based Fusion Metric ($Q_{G}$), and Chen-Blum's metric ($Q_{CB}$) shows that the proposed technique outperforms existing methods.
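The following is a minimal sketch of the fusion pipeline outlined above, written in Python with OpenCV and NumPy. It assumes single-channel float32 inputs in [0, 1]; the helper names, filter sizes, and guided-filter parameters (r, eps) are illustrative assumptions rather than the authors' implementation, and the guided filter comes from opencv-contrib-python (cv2.ximgproc).

```python
# Hedged sketch of a two-scale, guided-filter-based multi-exposure fusion
# with a ghost-suppression weight. Parameters and helpers are illustrative.
import cv2
import numpy as np

def two_scale_decompose(img, ksize=31):
    """Split an image into a base layer (large-scale variations) and a
    detail layer (small-scale variations) using an average filter."""
    base = cv2.blur(img, (ksize, ksize))
    return base, img - base

def edge_saliency(img, sigma=2.0):
    """Laplacian-of-Gaussian edge response, smoothed to act as a simple
    stand-in for the paper's edge saliency map."""
    log = cv2.Laplacian(cv2.GaussianBlur(img, (0, 0), sigma), cv2.CV_32F)
    return cv2.GaussianBlur(np.abs(log), (0, 0), 2.0)

def ghost_weight(img, ksize=5):
    """Median-filtered, histogram-equalized image used as a (hypothetical)
    consistency weight to suppress moving objects across exposures."""
    eq = cv2.equalizeHist((img * 255).astype(np.uint8))
    return cv2.medianBlur(eq, ksize).astype(np.float32) / 255.0

def fuse(images, r_base=45, eps_base=0.3, r_detail=7, eps_detail=1e-6):
    """Fuse a list of aligned float32 grayscale exposures in [0, 1]."""
    sal = [edge_saliency(im) * ghost_weight(im) for im in images]
    # Rough winner-takes-all weights from the combined saliency maps.
    winner = np.argmax(np.stack(sal), axis=0)
    weights = [(winner == i).astype(np.float32) for i in range(len(images))]

    fused_base = np.zeros_like(images[0])
    fused_detail = np.zeros_like(images[0])
    wb_sum = np.full_like(images[0], 1e-12)
    wd_sum = np.full_like(images[0], 1e-12)
    for im, w in zip(images, weights):
        base, detail = two_scale_decompose(im)
        # Guided filtering refines the rough weights with the source image
        # as guidance: coarse for the base layer, fine for the detail layer.
        wb = cv2.ximgproc.guidedFilter(im, w, r_base, eps_base)
        wd = cv2.ximgproc.guidedFilter(im, w, r_detail, eps_detail)
        fused_base += wb * base
        fused_detail += wd * detail
        wb_sum += wb
        wd_sum += wd
    return np.clip(fused_base / wb_sum + fused_detail / wd_sum, 0.0, 1.0)
```

In this sketch the rough weights are winner-takes-all maps derived from the combined saliency; the guided filter then smooths them at a coarse scale for the base layer and a fine scale for the detail layer before the weighted layers are recombined.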

Highlights

  • Images captured by ordinary digital cameras do not contain the entire detail of real-world scenes [1]; the dynamic range of the real world is large, whereas the sensors in ordinary cameras capture only a small portion of it [2].

  • The significant gap between the High Dynamic Range (HDR) of the real world and the Low Dynamic Range (LDR) of digital images can be addressed by two approaches: hardware-based and software-based.

  • The proposed method is compared with multi-exposure image fusion (MEF) and ghosting-artifact-removal techniques designed for dynamic or static scenes.


Introduction

The images captured by ordinary digital cameras do not contain the entire detail of real-world scenes [1]. This is because the dynamic range of the real world is large, whereas the sensors deployed in ordinary cameras can capture only a small portion of it [2]. There are two approaches to overcoming this significant difference between the High Dynamic Range (HDR) of the real world and the Low Dynamic Range (LDR) of digital images: the hardware-based approach and the software-based approach. In the hardware-based approach, cameras are equipped with sensors having HDR imaging capabilities.
