Abstract

Multi-exposure image fusion (MEF) is emerging as a research hotspot in the fields of image processing and computer vision. It integrates images captured at multiple exposure levels into a single well-exposed image of high quality, offering an economical and effective way to extend the dynamic range of an imaging system, with broad application prospects. In recent years, with the further development of image representation theories such as multi-scale analysis and deep learning, significant progress has been achieved in this field. This paper comprehensively surveys the current research status of MEF methods. The relevant theories and key technologies for constructing MEF models are analyzed and categorized, and the representative MEF methods in each category are introduced and summarized. Then, based on multi-exposure image sequences from both static and dynamic scenes, we present a comparative study of 18 representative MEF approaches using nine commonly used objective fusion metrics. Finally, the key issues in current MEF research are discussed, and directions for future research are put forward.

Highlights

  • Multi-exposure image fusion (MEF) is an essential technique for integrating image information captured at different exposure levels, enabling a more comprehensive understanding of the scene

  • To follow the latest development in this field, this paper summarizes the existing MEF methods and presents a literature review

  • These MEF methods can generally be divided into spatial domain, transform domain, and deep learning categories

Introduction

Brightness in a natural scene usually varies greatly; the luminance of direct sunlight is many orders of magnitude higher than that of dim indoor or night scenes. Compared with specialized HDR imaging hardware, MEF technology provides a simple, economical, and efficient way to resolve the contradiction between HDR imaging and low dynamic range (LDR) display. It avoids the complexity of imaging hardware circuit design and reduces the weight and power consumption of the whole device. MEF is a branch of image fusion and is similar to other image fusion tasks [5], such as multi-focus image fusion, visible and infrared image fusion, PET and MRI medical image fusion, multispectral and panchromatic remote sensing image fusion, hyperspectral and multispectral remote sensing image fusion, and optical and SAR remote sensing image fusion. These tasks all combine complementary content from multiple source images to generate high-quality images containing more important information.
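To make the core idea of MEF concrete, the following minimal sketch shows one of the simplest spatial-domain fusion strategies: each source exposure is weighted per pixel by a "well-exposedness" score (a Gaussian centered at mid-gray, as popularized by Mertens et al.), the weights are normalized across the exposure stack, and the result is a weighted blend. The function name and the toy inputs are illustrative, not from the paper; real pixel-based MEF methods typically combine several quality measures (contrast, saturation, exposedness) and blend across scales.

```python
import numpy as np

def fuse_exposures(images, sigma=0.2):
    """Fuse a list of aligned LDR exposures (float arrays in [0, 1]) by
    weighting each pixel with a Gaussian well-exposedness score centered
    at mid-gray (0.5), normalizing the weights across the stack, and
    computing the weighted average."""
    stack = np.stack(images).astype(np.float64)            # (N, H, W)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)          # normalize per pixel
    return (weights * stack).sum(axis=0)                   # weighted blend

# Toy example: an under-exposed and an over-exposed flat "image".
under = np.full((4, 4), 0.1)
over = np.full((4, 4), 0.9)
fused = fuse_exposures([under, over])
# Both inputs are equally far from mid-gray, so they receive equal
# weight and the fused result is 0.5 everywhere.
```

Blending weights directly in the spatial domain like this can introduce halos at strong edges, which is one motivation for the multi-scale decomposition-based methods surveyed later in the paper.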

A Review on MEF
Pixel-Based Methods
Patch-Based Methods
Optimization-Based Methods
Multi-Scale Decomposition-Based Methods
Gradient-Based Methods
Sparse Representation-Based Methods
Other Transform-Based Methods
Deep Learning Methods
Supervised Methods
Unsupervised Methods
HDR Deghosting Methods
Global Exposure Registration
Moving Object Removal
Moving Object Selection or Registration
Image Dataset
Subjective Qualitative Evaluation
Objective Quantitative Comparison
Comparisons of Different MEF Methods
Method
Testing for Static Scene
Testing for Dynamic Scene
Computational Efficiency
Future Prospects
Conclusions