Abstract

High-dynamic-range (HDR) imaging is a digital image processing technique that enhances an image’s visibility by modifying its color and contrast ranges. Generative adversarial networks (GANs) have proven to be potent deep learning models for HDR imaging; however, obtaining a sufficient volume of paired training images is difficult. CycleGAN avoids the need for paired data, but when it is used to convert a low-dynamic-range (LDR) image to an HDR image, the output exhibits problematic color distortion and only a slight change in intensity. Therefore, we propose a GAN training optimization model for converting LDR images into HDR images. First, a gamma shift method is proposed for training the GAN model over an extended luminance range. Next, a weighted loss map trains the GAN model for tone compression in local areas of the image. Then, a regional fusion training model balances the training method between the regional weight map and the restoring speed of local tone training. Finally, because the generator tends to perform well on bright images, mean gamma tuning is used to evaluate the image luminance channels, which are then fed into the modules. Tests are conducted on foggy, dark-surrounding, bright-surrounding, and high-contrast images. The proposed model outperforms conventional models in a comparison test and complements the performance of an object detection model even in a real night environment. The model can be used in commercial closed-circuit television (CCTV) surveillance systems and in the security industry.
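
As a rough sketch of the gamma-related steps, the following Python/NumPy snippet shows a power-law gamma shift on a normalized luminance channel and a simple mean-based gamma selection. The function names gamma_shift and mean_gamma_tuning, the target mean of 0.5, and the closed-form gamma choice are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def gamma_shift(lum, gamma):
        """Power-law transform of a normalized luminance channel (values in [0, 1]).

        gamma < 1 brightens (stretches dark tones); gamma > 1 darkens.
        Applying several gammas per training image is one way to expose a GAN
        to an extended luminance range.
        """
        return np.clip(lum, 0.0, 1.0) ** gamma

    def mean_gamma_tuning(lum, target_mean=0.5, eps=1e-6):
        """Pick a gamma so that mean(lum) ** gamma is close to target_mean, then apply it.

        A simple stand-in for brightness-normalizing the luminance channel
        before it is fed to the generator; the paper's tuning rule may differ.
        """
        m = float(np.mean(np.clip(lum, eps, 1.0 - eps)))
        gamma = np.log(target_mean) / np.log(m)
        return gamma_shift(lum, gamma)

For example, mean_gamma_tuning on a dark frame with mean luminance 0.25 yields gamma = 0.5, brightening the channel toward the target mean before it enters the network.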

Highlights

  • Kwon et al. [2] proposed a method for synthesizing an HDR image using spatial and intensity weighting with two exposure images

  • The results show that the proposed methods, Lum CycleGAN and exposure value (EV) GAN, have similar computational times because they use a gray image

  • This study proposes a GAN training optimization model for converting LDR images into HDR images


Summary

Introduction

Kwon et al. [2] proposed a method for synthesizing an HDR image using spatial and intensity weighting with two exposure images. These conventional methods require several low-dynamic-range (LDR) images to generate an HDR image, and ghosting artifacts occur when a moving object exists in an image. Alternative methods have been proposed for inferring an HDR image from a single LDR image, such as inverse tone mapping (iTM) [3]. iTM converts a single LDR image into an HDR image by expanding the contrast range of the LDR image despite its missing information. Rempel et al. [4] proposed a model that could improve the range of legacy videos and photographs for viewing.
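
As a simplified illustration of the iTM idea (not Rempel et al.'s operator), the sketch below linearizes a single LDR image by undoing a display gamma and rescales it to a wider luminance range. The display gamma of 2.2, the peak luminance value, and the function name are illustrative assumptions.

    import numpy as np

    def simple_inverse_tone_map(ldr, display_gamma=2.2, peak_luminance=1000.0):
        """Expand a single LDR image (values in [0, 1]) to a wider luminance range.

        Step 1: undo the display gamma to recover approximately linear light.
        Step 2: rescale to an HDR peak luminance (e.g., in nits).
        Published iTM operators additionally estimate spatially varying expansion
        maps to boost highlights and compensate for missing detail.
        """
        linear = np.clip(ldr, 0.0, 1.0) ** display_gamma
        return linear * peak_luminance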
