Abstract

High dynamic range (HDR) imaging has wide applications in intelligent vision sensing, including enhanced electronic imaging, smart surveillance, self-driving cars, intelligent medical diagnosis, etc. Exposure fusion is an essential HDR technique which fuses differently exposed images of the same scene into an HDR-like image. However, determining appropriate fusion weights is difficult because each differently exposed image contains only a subset of the scene's details. During blending, local color inconsistency poses a further challenge and often requires manual tuning to avoid image artifacts. To address this problem, we present an adaptive coarse-to-fine searching approach to find the optimal fusion weights. In the coarse-tuning stage, fuzzy logic is used to efficiently decide the initial weights. In the fine-tuning stage, a multivariate normal conditional random field (MNCRF) model is used to adjust the fuzzy-based initial weights, which allows both intra- and inter-image information in the data to be considered. Moreover, a multiscale enhanced fusion scheme is proposed to blend the input images while maintaining the details at each scale level. The proposed fuzzy-based MNCRF fusion method provided a smoother blending result and a more natural look, while the details in the highlight and dark regions were preserved simultaneously. The experimental results demonstrated that our work outperformed the state-of-the-art methods not only in several objective quality measures but also in a user study analysis.
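The coarse-tuning stage described above can be illustrated with a minimal sketch. It uses a Gaussian "well-exposedness" membership function as a generic stand-in for the paper's fuzzy rules; the function names and the parameters `mu` and `sigma` are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def well_exposedness_weight(img, mu=0.5, sigma=0.2):
    """Gaussian membership of each pixel intensity (in [0, 1]) to the
    fuzzy set 'well exposed': pixels near mid-gray (mu) get weight
    close to 1, over- or under-exposed pixels close to 0.
    mu and sigma are illustrative parameters, not from the paper."""
    return np.exp(-((img - mu) ** 2) / (2.0 * sigma ** 2))

def initial_weights(images, eps=1e-12):
    """Per-pixel coarse weights across N exposures, normalized so the
    weights at each pixel sum to 1 over all input images."""
    w = np.stack([well_exposedness_weight(im) for im in images])
    return w / (w.sum(axis=0) + eps)
```

In the paper these coarse weights are then fine-tuned by the MNCRF model before fusion; the sketch above covers only the initialization step.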

Highlights

  • Real-world scenes have a large dynamic range: the luminance of a highlight region can be over one hundred thousand times larger than that of a dark region

  • One of the most typical exposure fusion methods is the method proposed in Reference [4], which determines the pixel weights by considering different properties at the same time

  • For the result of the method in Reference [23] (Figure 6d), some white noise-like dots can be seen on the floor


Summary

Introduction

Real-world scenes have a large dynamic range: the luminance of a highlight region can be over one hundred thousand times larger than that of a dark region. In Reference [10], Kinoshita and Kiya proposed a segmentation-based approach for luminance adjustment and enhancement, which can be applied to the differently exposed input images to improve the quality of the final fused image. After determining the optimal weights with the fuzzy-MNCRF model, this paper adopted a pyramid decomposition scheme for the multi-scale fusion of differently exposed images. In Reference [22], an edge-preserving smoothing pyramid, based on the gradient domain-guided image filter (GGIF) [23], was proposed to preserve the details in the brightest and darkest regions for multi-scale exposure fusion. Compared to the above methods, this work presents a detail preservation scheme: we utilized the MNCRF model to fine-tune the weight maps (before the multi-scale fusion stage) to achieve pleasing image quality.
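The pyramid decomposition scheme mentioned above can be sketched as follows. This is a minimal Laplacian-pyramid fusion in the style of classic multi-scale exposure fusion, assuming simple box-filter downsampling and nearest-neighbor upsampling in place of the GGIF-based edge-preserving pyramids of Reference [22]; all function names are illustrative, and the weight maps are assumed to be already normalized (e.g., by the fuzzy-MNCRF stage):

```python
import numpy as np

def downsample(img):
    # 2x2 box-filter decimation (a crude stand-in for a Gaussian
    # pyramid step); assumes even image dimensions.
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def upsample(img, shape):
    # Nearest-neighbor expansion back to the target shape.
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    # Band-pass levels plus a low-frequency residual at the base.
    pyr, cur = [], img
    for _ in range(levels - 1):
        down = downsample(cur)
        pyr.append(cur - upsample(down, cur.shape))
        cur = down
    pyr.append(cur)
    return pyr

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(downsample(pyr[-1]))
    return pyr

def fuse(images, weights, levels=3):
    """Blend each Laplacian level of the inputs with the matching
    Gaussian level of the weight maps, then collapse the pyramid."""
    lap = [laplacian_pyramid(im, levels) for im in images]
    gw = [gaussian_pyramid(w, levels) for w in weights]
    fused = [sum(gw[k][l] * lap[k][l] for k in range(len(images)))
             for l in range(levels)]
    out = fused[-1]
    for l in range(levels - 2, -1, -1):
        out = fused[l] + upsample(out, fused[l].shape)
    return out
```

Blending the weights at coarse scales and the image detail at fine scales is what avoids the visible seams that direct per-pixel weighted averaging would produce.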

Motivation of Integrating Fuzzy Logic with MNCRF Model
Proposed Approach
Fuzzy-Based Pixel Weights Initialization
Weight Fine-Tuning Using the MNCRF Model
Inter-Image Relationships
Intra-Image Relationships
Reference
Experimental Results and Discussions
Comparison of the Objective Quality Measures
Method
Visual Comparison and User Study Analysis
Conclusions
