Abstract

Underwater images are often acquired in sub-optimal lighting conditions, in particular at great depths, where the absence of natural light demands the use of artificial lighting. Low-lighting images pose a challenge for both manual and automated analysis, since regions of interest can have low visibility. A new framework capable of significantly enhancing these images is proposed in this article. The framework is based on a novel dehazing mechanism that considers local contrast information in the input images, and offers a solution to three common disadvantages of current single-image dehazing methods: oversaturation of radiance, lack of scale-invariance, and creation of halos. A novel low-lighting underwater image dataset, OceanDark, is introduced to assist in the development and evaluation of the proposed framework. Experimental results and a comparison with other underwater-specific image enhancement methods show that the proposed framework can significantly improve visibility in low-lighting underwater images of different scales without creating undesired dehazing artifacts.
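
As context for the dehazing terminology above (this formulation is standard in the single-image dehazing literature that such frameworks build on, and is not a detail stated in the abstract itself), dark channel prior (DCP) style methods model a degraded image as

    I(x) = J(x) t(x) + A (1 - t(x))

where J is the haze-free scene radiance, t is the transmission map, and A is the global ambient light. Recovering J requires estimating t over local image patches, and the choice of patch size is closely tied to the oversaturation, scale-dependence, and halo issues listed above; a runnable sketch of this pipeline is given after the Introduction below.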

Highlights

  • Recent developments in the infrastructure employed for the exploration of underwater environments such as remotely operated vehicles (ROVs) and cabled ocean observatories have allowed for long-term monitoring of underwater sites

  • Multiple datasets composed of underwater images are available online, for example: TURBID [43], which offers hazy underwater images; the samples used in the color restoration processes of [19]; the various datasets from the National Oceanic and Atmospheric Administration (NOAA) [44]; and the underwater stereo vision videos from MARIS [45]

  • The OceanDark dataset does not provide a ground truth reference for the image enhancement framework, given that, for the samples being analyzed, there are no counterpart images with “optimal lighting conditions” available—that is, we are limited to the images captured from the videos as they exist in the archive

Summary

Introduction

Recent developments in the infrastructure employed for the exploration of underwater environments, such as remotely operated vehicles (ROVs) and cabled ocean observatories, have allowed for long-term monitoring of underwater sites. Since 2006, Ocean Networks Canada (ONC) has captured more than 90,000 h of underwater video from mobile ROVs and camera systems at fixed locations, containing important information for the understanding of marine environments from the coast to the deep sea. This massive amount of data imposes a difficult challenge for scientific analysis, given that the manual investigation of such a volume of imagery would require prohibitive amounts of time. Low-lighting images captured underwater present additional light attenuation challenges, since they are subject to two inherent properties of the underwater environment: absorption, which attenuates the light energy as it travels through the water depending on its wavelength, and scattering, in which particles in the water reflect and deflect the light from objects on its way to the camera. These phenomena cause blurring as well as loss of contrast and color. These types of applications motivate the present research study, which aims at enhancing the quality of low-lighting underwater images through a single-image, contrast-guided approach.
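
The sketch below, written in Python with NumPy and OpenCV, illustrates the standard DCP dehazing pipeline that single-image methods of this kind build on. It is a minimal illustration under that assumption only: the function names and the fixed patch_size are choices made for this example, and the contrast-guided, dynamically sized patches proposed in the paper are not reproduced here.

    import numpy as np
    import cv2

    def dark_channel(img, patch_size=15):
        # Per-pixel minimum over the color channels, followed by a local
        # minimum filter (erosion with a square kernel) over each patch.
        min_channel = np.min(img, axis=2)
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch_size, patch_size))
        return cv2.erode(min_channel, kernel)

    def estimate_ambient_light(img, dark, top_fraction=0.001):
        # Common heuristic: average the image pixels corresponding to the
        # brightest fraction of the dark channel.
        flat = dark.ravel()
        n = max(1, int(flat.size * top_fraction))
        idx = np.argsort(flat)[-n:]
        return img.reshape(-1, 3)[idx].mean(axis=0)

    def dehaze(img_uint8, patch_size=15, omega=0.95, t_min=0.1):
        # Image formation model: I(x) = J(x) t(x) + A (1 - t(x)).
        # Estimate the transmission t and ambient light A, then invert the
        # model to recover the scene radiance J.
        img = img_uint8.astype(np.float64) / 255.0
        dark = dark_channel(img, patch_size)
        A = estimate_ambient_light(img, dark)
        t = 1.0 - omega * dark_channel(img / A, patch_size)
        t = np.clip(t, t_min, 1.0)[..., np.newaxis]
        J = (img - A) / t + A
        return np.clip(J * 255.0, 0, 255).astype(np.uint8)

In the DCP literature, a single fixed patch_size is a known weak point: overly small patches tend to oversaturate the recovered radiance, while overly large ones are not scale-invariant and create halos around depth discontinuities, which is consistent with the disadvantages listed in the abstract and with the framework's use of local contrast information and dynamically sized patches (see "Comparison between the Usage of Dynamic and Static Patch Sizes" below).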

Background
Contributions
Proposed Approach
DCP-Based Dehazing of Single Images
Transmission Map Refinement
Disadvantages of the Use of Single-Sized Patches
Result
Experimental Results
The OceanDark Dataset
Contrast-Guided Approach Evaluation
Case Study
Comparison between the Usage of Dynamic and Static Patch Sizes
Enhancement Framework Evaluation with Low-Lighting Underwater Images
Comparison with State-of-the-Art Underwater-Specific Enhancement Frameworks
Conclusions
Methods