Abstract

Image restoration techniques process degraded images to reveal obscured details or enhance the scene with good contrast and vivid color for the best possible visibility. Poor illumination conditions cause issues such as high noise levels, unnatural color or texture distortions, nonuniform exposure, halo artifacts, and lack of sharpness. This article presents a novel end-to-end trainable deep convolutional neural network, the deep perceptual image enhancement network (DPIENet), to address these challenges. The novel contributions of the proposed work are: 1) a framework that synthesizes multiple exposures from a single image and utilizes the exposure variation to restore the image and 2) a loss function based on an approximation of the logarithmic response of the human eye. Extensive computer simulations on the benchmark MIT-Adobe FiveK dataset and user studies performed using the Google high dynamic range, DIV2K, and low-light image datasets show that DPIENet has clear advantages over state-of-the-art techniques. It has the potential to be useful for many everyday applications, such as modernizing traditional camera technologies that currently capture images/videos with under/overexposed regions due to their sensors' limitations, helping users capture appealing images in consumer photography, and serving a variety of intelligent systems, including automated driving and video surveillance applications.
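To make the two stated contributions concrete, the following is a minimal PyTorch sketch of the ideas the abstract names; it is not the authors' implementation. The exposure factors, the gamma value, the function names (`synthesize_exposures`, `log_perceptual_loss`), and the exact loss form are illustrative assumptions; only the general ideas, re-exposing a single image and penalizing error in a logarithmic domain that approximates the eye's brightness response, come from the abstract.

```python
import torch
import torch.nn.functional as F

def synthesize_exposures(img, factors=(0.25, 0.5, 1.0, 2.0, 4.0)):
    """Approximate multi-exposure synthesis from a single image.

    img: tensor in [0, 1], shape (B, C, H, W). The exposure factors
    here are illustrative guesses, not the values used by DPIENet.
    """
    # Scale in (approximately) linear light: undo gamma, scale, re-apply gamma.
    linear = img.clamp(min=1e-6) ** 2.2
    return [(linear * f).clamp(0.0, 1.0) ** (1.0 / 2.2) for f in factors]

def log_perceptual_loss(pred, target, eps=1.0 / 255.0):
    """L1 loss on log-encoded intensities, mimicking the eye's roughly
    logarithmic response (a hypothetical stand-in for the paper's loss,
    not its exact formulation)."""
    return F.l1_loss(torch.log(pred + eps), torch.log(target + eps))

# Usage sketch: re-expose a prediction and score it against a reference.
pred = torch.rand(1, 3, 64, 64)
target = torch.rand(1, 3, 64, 64)
exposures = synthesize_exposures(pred)    # list of 5 re-exposed views
loss = log_perceptual_loss(pred, target)  # scalar loss tensor
print(len(exposures), loss.item())
```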

Highlights

  • Images and videos capture a vast amount of rich and detailed information about the scene

  • For training, validation, and testing purposes, the MIT-Adobe FiveK dataset [64] is employed; this dataset contains 5000 photographs taken with SLR cameras by various photographers

  • These photographs covered a broad range of scenes, objects, subjects, and lighting conditions

Introduction

Images and videos capture a vast amount of rich and detailed information about the scene. Intelligent systems use these captured images for various computer vision tasks, such as image enhancement, object detection, classification and recognition, segmentation, 3-D scene understanding, and modeling [1]. These tasks form the building blocks for real-world applications.
