In the domain of image processing, Multi-Exposure Image Fusion (MEF) is a crucial technique for producing high dynamic range (HDR) representations by fusing sequences of low dynamic range (LDR) images. Conventional fusion methods often suffer from shortcomings such as detail loss, edge artifacts, and color inconsistencies that compromise the quality of the fused output, and these problems are exacerbated when the inputs are few and extremely exposed. While a few efforts have addressed fusion of limited and impaired static input images, the fusion of dynamic image sets has remained unexplored. This paper proposes an effective MEF approach that operates on limited sets of as few as two extremely exposed images of both static and dynamic scenes. The approach begins by categorizing input images as under-exposed or over-exposed based on their lighting levels and then applies tailored exposure-correction strategies to each category. Through iterative refinement and selection of the optimally exposed variants, we construct an enriched intermediate stack, on which fusion is performed with a pyramidal technique. The method derives the weight maps for pyramidal fusion from adaptive well-exposedness and color-gradient measures; the initial weights are refined with a Gaussian filter, yielding a seamlessly fused image with an expanded dynamic range. For dynamic scenes, we additionally propose an adaptive color-dissimilarity measure and dynamic equalization to suppress ghosting artifacts. Comparative assessments against existing methods, both visual and quantitative, confirm the superior performance of the proposed model.
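
The following is a minimal sketch of the weight-map construction and pyramidal fusion stage described above, assuming a Mertens-style well-exposedness term, a Sobel-based color-gradient term, and a multiplicative combination of the two; the sigma value, kernel sizes, and pyramid depth are illustrative assumptions, not the paper's exact formulation.

```python
import cv2
import numpy as np

def well_exposedness(img, sigma=0.2):
    """Per-pixel weight favoring intensities near 0.5 (img in [0, 1], H x W x 3)."""
    w = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))
    return np.prod(w, axis=2)  # combine the three color channels

def color_gradient(img):
    """Gradient-magnitude measure summed over color channels."""
    g = np.zeros(img.shape[:2], dtype=np.float64)
    for c in range(img.shape[2]):
        gx = cv2.Sobel(img[:, :, c], cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(img[:, :, c], cv2.CV_64F, 0, 1, ksize=3)
        g += np.sqrt(gx ** 2 + gy ** 2)
    return g

def fuse(stack, levels=5):
    """Fuse a list of aligned float32 images in [0, 1] (H x W x 3)."""
    # 1. Initial weights: well-exposedness x color gradient, refined by a Gaussian filter.
    weights = []
    for img in stack:
        w = well_exposedness(img) * (color_gradient(img) + 1e-12)
        weights.append(cv2.GaussianBlur(w, (11, 11), 0))
    weights = np.stack(weights)
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12  # normalize per pixel

    # 2. Blend Laplacian pyramids of the images with Gaussian pyramids of the weights.
    fused_pyr = None
    for img, w in zip(stack, weights):
        # Gaussian pyramid of the weight map, replicated to 3 channels.
        gp = [np.repeat(w[:, :, None], 3, axis=2)]
        for _ in range(levels - 1):
            gp.append(cv2.pyrDown(gp[-1]))
        # Laplacian pyramid of the image.
        lp, cur = [], img
        for _ in range(levels - 1):
            down = cv2.pyrDown(cur)
            up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
            lp.append(cur - up)
            cur = down
        lp.append(cur)
        contrib = [l * g for l, g in zip(lp, gp)]
        fused_pyr = contrib if fused_pyr is None else [f + c for f, c in zip(fused_pyr, contrib)]

    # 3. Collapse the fused pyramid back to a single image.
    out = fused_pyr[-1]
    for lvl in reversed(fused_pyr[:-1]):
        out = cv2.pyrUp(out, dstsize=(lvl.shape[1], lvl.shape[0])) + lvl
    return np.clip(out, 0, 1)
```

Blending in the pyramid domain rather than per pixel is what avoids the seams that direct weighted averaging would produce at exposure boundaries.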
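For the deghosting step, the sketch below shows one hypothetical way a color-dissimilarity measure over exposure-equalized frames could attenuate fusion weights at moving content; the use of histogram matching as the equalization, the choice of a reference frame, and the exponential attenuation with parameter `alpha` are all assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def match_histogram(src, ref):
    """Map src intensities so their histogram matches ref (both in [0, 1])."""
    s_vals, s_idx, s_cnt = np.unique(src.ravel(), return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / src.size
    r_cdf = np.cumsum(r_cnt) / ref.size
    return np.interp(s_cdf, r_cdf, r_vals)[s_idx].reshape(src.shape)

def ghost_weight(img, ref, alpha=10.0):
    """Down-weight pixels whose color deviates from the equalized reference frame."""
    # Equalize each channel to the reference so exposure differences do not
    # register as motion (a stand-in for the paper's dynamic equalization).
    eq = np.stack([match_histogram(img[..., c], ref[..., c]) for c in range(3)], axis=-1)
    dissimilarity = np.linalg.norm(eq - ref, axis=-1)  # per-pixel color distance
    return np.exp(-alpha * dissimilarity)  # ~1 where consistent, ~0 where ghosted
```

In a pipeline like the one above, these per-image maps would multiply the fusion weights before normalization and pyramid blending, so moving objects are drawn from a single consistent exposure.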