A Two-Step Approach for Underwater Image Enhancement

Abstract

Underwater photography is important for exploring underwater scenes. Compared with images captured in air, underwater images usually suffer from low brightness, low contrast, and poor visual quality. This paper proposes a two-step approach to improve the visual quality of underwater images. First, a transmission-map-based enhancement, similar to image defogging algorithms, is applied to increase global contrast. Second, image details are extracted and local contrast is improved by applying an edge-preserving filter. Experiments demonstrate that the proposed two-step approach significantly improves the visual quality of underwater images.
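The two-step pipeline described above can be sketched roughly as follows. This is a minimal illustration, assuming a dark-channel-prior-style transmission estimate for step one and a simple base/detail split for step two; the box-filter base layer stands in for a true edge-preserving filter, and all parameter values are assumptions, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import minimum_filter, uniform_filter

def enhance_two_step(img, omega=0.85, t_min=0.1, patch=15, detail_gain=1.5):
    """Two-step enhancement sketch: defog-style global contrast, then a
    local detail boost. img: float array in [0, 1], shape (H, W, 3).
    Parameter values here are illustrative assumptions."""
    # --- Step 1: transmission-map-based global contrast enhancement ---
    # Dark channel: per-pixel channel minimum followed by a local minimum filter.
    dark = minimum_filter(img.min(axis=2), size=patch)
    # Background (veiling) light: mean color of the brightest dark-channel pixels.
    idx = np.unravel_index(
        np.argsort(dark, axis=None)[-max(1, dark.size // 1000):], dark.shape)
    A = img[idx].mean(axis=0)
    # Transmission estimate and scene-radiance recovery.
    t = 1.0 - omega * minimum_filter((img / A).min(axis=2), size=patch)
    t = np.clip(t, t_min, 1.0)[..., None]
    recovered = np.clip((img - A) / t + A, 0.0, 1.0)

    # --- Step 2: detail extraction and local contrast boost ---
    # NOTE: a box filter is NOT edge-preserving; it stands in for the
    # edge-preserving filter used in the paper.
    base = np.stack(
        [uniform_filter(recovered[..., c], size=9) for c in range(3)], axis=2)
    detail = recovered - base
    return np.clip(base + detail_gain * detail, 0.0, 1.0)
```

In practice the detail layer would come from a bilateral or weighted-least-squares filter (as in the edge-preserving decomposition literature the paper cites), which avoids the halos a box-filter split can introduce around strong edges.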

References (showing 10 of 11 papers)
  • Single image haze removal using dark channel prior. Kaiming He + 2 more. Jun 1, 2009. DOI: 10.1109/cvpr.2009.5206515. Cited by 773.

  • Edge-preserving decompositions for multi-scale tone and detail manipulation. Zeev Farbman + 3 more. ACM Transactions on Graphics, Aug 1, 2008. DOI: 10.1145/1360612.1360666. Cited by 1344.

  • A Fast Approximation of the Bilateral Filter Using a Signal Processing Approach. Sylvain Paris + 1 more. Jan 1, 2006. DOI: 10.1007/11744085_44. Cited by 390. Open access.

  • Fast High-Dimensional Filtering Using the Permutohedral Lattice. Andrew Adams + 2 more. Computer Graphics Forum, May 1, 2010. DOI: 10.1111/j.1467-8659.2009.01645.x. Cited by 391. Open access.

  • Enhancing underwater images and videos by fusion. C. Ancuti + 3 more. Jun 1, 2012. DOI: 10.1109/cvpr.2012.6247661. Cited by 816.

  • Edge-preserving smoothing using a similarity measure in adaptive geodesic neighbourhoods. Jacopo Grazzini + 1 more. Pattern Recognition, Nov 18, 2008. DOI: 10.1016/j.patcog.2008.11.004. Cited by 51.

  • Effective Single Underwater Image Enhancement by Fusion. Shuai Fang + 3 more. Journal of Computers, Jan 4, 2013. DOI: 10.4304/jcp.8.4.904-911. Cited by 23.

  • Single image dehazing. Raanan Fattal. ACM Transactions on Graphics, Aug 1, 2008. DOI: 10.1145/1360612.1360671. Cited by 1914.

Similar Papers
  • Research Article. Underwater Image Enhancement using Convolution Denoising Network and Blind Convolution. Shubhangi Adagale-Vairagar + 2 more. Engineering, Technology & Applied Science Research, Feb 2, 2025. DOI: 10.48084/etasr.9067.

Underwater Image Enhancement (UWIE) is essential for improving the quality of Underwater Images (UWIs). However, recent UWIE methods face challenges due to low lighting conditions, contrast issues, color distortion, lower visibility, stability and buoyancy, pressure and temperature, and white balancing problems. Traditional techniques cannot capture the fine changes in UWI texture and cannot learn complex patterns. This study presents a UWIE Network (UWIE-Net) based on a parallel combination of a denoising Deep Convolution Neural Network (DCNN) and blind convolution to improve the overall visual quality of UWIs. The DCNN is used to depict the UWI complex pattern features and focuses on enhancing the image's contrast, color, and texture. Blind convolution is employed in parallel to minimize noise and irregularities in the image texture. Finally, the images obtained at the two parallel layers are fused using wavelet fusion to preserve the edge and texture information of the final enhanced UWI. The effectiveness of UWIE-Net was evaluated on the Underwater Image Enhancement Benchmark Dataset (UIEB), achieving MSE of 23.5, PSNR of 34.42, AG of 13.56, PCQI of 1.23, and UCIQE of 0.83. The UWIE-Net shows notable improvement in the overall visual and structural quality of UWIs compared to existing state-of-the-art methods.

  • Conference Article. Two-subnet fusion for perceptual quality driven based underwater image enhancement. Shengcong Wu + 3 more. Oct 5, 2020. DOI: 10.1109/ieeeconf38699.2020.9389099.

In recent years, enhancement of underwater images based on generative adversarial networks (GANs) has been widely used. To address the shortcoming that the texture details of GAN-generated images are not clear enough, an underwater image pre-process unit and a generator comprising a perceptual subnet and a refine subnet are designed to improve the visual quality of underwater images. The role of the perceptual subnet is to maintain the structure and semantic information of the input image; the refine subnet makes the texture details of the generated image clearer. Specifically, considering that degraded underwater images may prevent the network from learning effectively, an underwater image pre-process unit is proposed to improve their quality. To maintain structure and semantic information and to preserve texture details, the perceptual subnet and the refine subnet are proposed, respectively. Then, a color-structure perception loss is proposed to obtain good performance in color and structure, and a content loss and a detail loss are proposed to keep content consistent and texture sharp. Ablation studies verify the rationality and effectiveness of each loss function. Finally, subjective and objective experiments show that the proposed method can produce underwater images with higher visual quality and clearer texture details.

  • Research Article. An in-situ image enhancement method for the detection of marine organisms by remotely operated vehicles. Wenjia Ouyang + 3 more. ICES Journal of Marine Science, Feb 7, 2024. DOI: 10.1093/icesjms/fsae004. Cited by 2.

With the assistance of the visual system, remote operated vehicles (ROVs) can replace frogmen to achieve safer and more efficient capturing of marine organisms. However, the selective absorption and scattering of light lead to a decrease in the visual quality of underwater images, which hinders ROV operators from observing the operating environment. Unfortunately, most image enhancement methods only focus on image color correction rather than perceptual enhancement, which in turn prevents the object detector from quickly locating the target. Therefore, a visual-enhanced and detection-friendly underwater image enhancement method is needed. In this paper, an underwater image enhancement method called in-situ enhancement is proposed to improve the semantic information of the visual hierarchy based on current scene information in multiple stages. Mapping the underwater image to its dual space allows the enhancement equation to be applied to severely degraded underwater scenes. Moreover, it is also a detection-friendly method and has good generalization in both visual quality improvement and object detection. The experimental results show that in different underwater datasets, the in-situ enhancement effectively improves the visual quality of underwater images, and its enhanced results train different object detectors with high detection accuracy.

  • Research Article. Underwater Image Enhancement Based on U-Net Architecture and Channel Attention Mechanism Fusion Generative Adversarial Network. Gang Li + 2 more. International Journal of Pattern Recognition and Artificial Intelligence, Jun 5, 2025. DOI: 10.1142/s0218001425550080.

In response to the challenges of blur distortion, low contrast and color fading in underwater images, caused by complex environmental factors and light attenuation, this study presents a novel underwater image enhancement method that leverages the U-Net architecture and channel attention mechanism fusion generative adversarial network (GAN), named UAEGAN. UAEGAN is built on the framework of GAN, combining the U-Net structure with a channel attention mechanism to construct a generator network, reducing the loss of low-level information during feature extraction and enhancing image details. Additionally, the algorithm employs a PatchGAN discriminator, which improves image resolution and detail representation by performing fine-grained true/false judgments on local image patches. Finally, the visual quality of the enhanced image is further optimized through the weighted fusion of multiple loss functions. Experimental results on the UIEB dataset indicate that UAEGAN outperforms the latest methods in terms of both visual quality and numerical metrics. The algorithm effectively enhances the clarity and visual quality of underwater images, providing strong support for subsequent underwater image processing tasks and applications.

  • Research Article. An approach for underwater image enhancement based on color correction and dehazing. Yue Zhang + 2 more. International Journal of Advanced Robotic Systems, Sep 1, 2020. DOI: 10.1177/1729881420961643. Cited by 18.

Due to the absorption and scattering effect on light when traveling in water, underwater images exhibit serious weakening such as color deviation, low contrast, and blurry details. Traditional algorithms have certain limitations in the case of these images with varying degrees of fuzziness and color deviation. To address these problems, a new approach for single underwater image enhancement based on fusion technology was proposed in this article. First, the original image is preprocessed by the white balance algorithm and dark channel prior dehazing technologies, respectively; then two input images were obtained by color correction and contrast enhancement; and finally, the enhanced image was obtained by utilizing the multiscale fusion strategy which is based on the weighted maps constructed by combining the features of global contrast, local contrast, saliency, and exposedness. Qualitative results revealed that the proposed approach significantly removed haze, corrected color deviation, and preserved image naturalness. For quantitative results, the test with 400 underwater images showed that the proposed approach produced a lower average value of mean square error and a higher average value of peak signal-to-noise ratio than the compared method. Moreover, the enhanced results obtain the highest average value in terms of underwater image quality measures among the comparable methods, illustrating that our approach achieves superior performance on different levels of distorted and hazy images.
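The weighted-fusion strategy described in this abstract can be illustrated with a minimal single-scale sketch. The paper itself fuses at multiple scales using four weight maps (global contrast, local contrast, saliency, exposedness); the two simplified weight proxies and the single scale below are assumptions made for brevity.

```python
import numpy as np

def fuse_inputs(input_a, input_b, eps=1e-6):
    """Single-scale weight-map fusion of two pre-processed inputs.
    input_a, input_b: float arrays in [0, 1] with identical shapes.
    Weight maps here are simplified stand-ins for the paper's four maps."""
    def weight(img):
        # Local-contrast proxy: gradient magnitude of the luminance.
        lum = img.mean(axis=2) if img.ndim == 3 else img
        gy, gx = np.gradient(lum)
        contrast = np.hypot(gx, gy)
        # Exposedness proxy: Gaussian preference for mid-tone pixels.
        exposed = np.exp(-((lum - 0.5) ** 2) / (2 * 0.25 ** 2))
        return contrast + exposed

    wa, wb = weight(input_a), weight(input_b)
    norm = wa + wb + eps          # normalize so the weights sum to one
    wa, wb = wa / norm, wb / norm
    if input_a.ndim == 3:
        wa, wb = wa[..., None], wb[..., None]
    return wa * input_a + wb * input_b
```

A faithful implementation would blend Laplacian pyramids of the inputs against Gaussian pyramids of the weights, which suppresses the seams that naive per-pixel blending can produce.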

  • Research Article. A Multi-Scale Contextual Fusion Residual Network for Underwater Image Enhancement. Chenye Lu + 3 more. Journal of Marine Science and Engineering, Aug 9, 2025. DOI: 10.3390/jmse13081531.

Underwater image enhancement (UIE) is a key technology in the fields of underwater robot navigation, marine resources development, and ecological environment monitoring. Due to the absorption and scattering of different wavelengths of light in water, the quality of raw underwater images usually deteriorates. In recent years, UIE methods based on deep neural networks have made significant progress, but problems remain, such as insufficient local detail recovery and difficulty in effectively capturing multi-scale contextual information. To solve these problems, a Multi-Scale Contextual Fusion Residual Network (MCFR-Net) for underwater image enhancement is proposed in this paper. Firstly, we propose an Adaptive Feature Aggregation Enhancement (AFAE) module, which adaptively strengthens the key regions in the input images and improves feature expressiveness by fusing multi-scale convolutional features with a self-attention mechanism. Secondly, we design a Residual Dual Attention Module (RDAM), which captures and strengthens features in key regions through two self-attention computations and residual connections, while effectively retaining the original information. Thirdly, a Multi-Scale Feature Fusion Decoding (MFFD) module is designed to obtain rich context at multiple scales, improving the model's understanding of details and global features. We conducted extensive experiments on four datasets, and the results show that MCFR-Net effectively improves the visual quality of underwater images and outperforms many existing methods in both full-reference and no-reference metrics. Compared with existing methods, the proposed MCFR-Net not only captures local details and global context more comprehensively, but also shows clear advantages in visual quality and generalization performance. It provides a new technical route and benchmark for subsequent research in underwater vision processing, with both academic and practical value.

  • Conference Article. Underwater Image Restoration Based on Local Depth Information Prior. Jun Hou + 5 more. Oct 17, 2022. DOI: 10.1109/oceans47191.2022.9977320. Cited by 1.

Underwater images often suffer from color distortion and loss of contrast due to the absorption and scattering of light as it travels through water. Although the physical process of underwater imaging is similar to that of atmospheric haze, traditional dehazing methods cannot produce good results because light attenuates differently at different wavelengths underwater. To overcome this problem, we propose a novel underwater image restoration method based on local depth information priors. First, we use a computer vision-based multi-view geometry method to estimate the local depth information of the image for parameter estimation of the depth compensation model. According to the characteristics of underwater optical imaging, we introduce an underwater color correction method using depth compensation. Second, we propose a method for estimating the global depth image with local depth information priors. Finally, we adopt the global depth image to recover the underwater image. Experimental results demonstrate that the recovered images can achieve better visual quality compared to several state-of-the-art methods.

  • Conference Article. Underwater image restoration using color correction and non-local prior. Meng Wu + 3 more. Jun 1, 2017. DOI: 10.1109/oceanse.2017.8084916. Cited by 15.

Underwater images often suffer from color and contrast degradation, because the light is absorbed and scattered while traveling in water. Although the physical process of the underwater images seems similar to the outdoor haze images, conventional dehazing methods fail to generate accurate results since colors associated to different wavelengths have different attenuation rates in underwater conditions. To overcome this, we propose a novel underwater image restoration method based on color correction and image dehazing. First, we estimate the global background light using a hierarchical search based on quad-tree subdivision combined with the ocean optical properties. According to the properties of underwater optical imaging, we then introduce an underwater color correction method using depth compensation, in which a multi-channel guided image filter is proposed to refine the depth image. Finally, we adopt the non-local image dehazing algorithm to restore the underwater images. Experimental results demonstrate that the restored images can achieve better visual quality of underwater images when compared with several state-of-the-art methods.
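The hierarchical background-light search by quad-tree subdivision mentioned in this abstract can be sketched as below. Scoring quadrants purely by mean intensity is a simplification: the paper additionally folds in ocean optical properties, which are omitted here.

```python
import numpy as np

def estimate_background_light(img, min_size=8):
    """Quad-tree search for the global background light. At each level the
    image region is split into four quadrants and the search descends into
    the one with the highest mean intensity (a simplified score).
    img: float array in [0, 1], shape (H, W, 3). Returns an RGB triple."""
    region = img
    while min(region.shape[0], region.shape[1]) > min_size:
        h, w = region.shape[0] // 2, region.shape[1] // 2
        quadrants = [region[:h, :w], region[:h, w:],
                     region[h:, :w], region[h:, w:]]
        # Descend into the brightest quadrant.
        region = max(quadrants, key=lambda q: q.mean())
    # Background light: mean color of the final small region.
    return region.reshape(-1, 3).mean(axis=0)
```

The recursion makes the estimate robust to isolated bright pixels (e.g. specular highlights), since a single outlier barely moves a quadrant's mean.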

  • Research Article. Underwater image enhancement with latent consistency learning-based color transfer. Hua Yang + 4 more. IET Image Processing, Feb 1, 2022. DOI: 10.1049/ipr2.12433. Cited by 15.

Due to the inevitable wavelength‐dependent light absorption and forward/backward scattering, underwater images usually suffer severe color distortion and are hazy. It has become quite necessary to improve the visual quality of underwater images for both underwater observation and operation. Traditional enhancement methods and existing deep learning‐based approaches to underwater image enhancement usually produce unsatisfactory results for photographs taken in complicated, wild underwater scenes. In such scenes, complex and diverse degradation‐enhancement mappings are often difficult to model, especially since there are very limited samples available for learning. Inspired by the success of color‐transfer techniques, it is found that clear template image‐assisted color transfer is a promising strategy for underwater image enhancement, including not only color correction but also contrast and visibility improvement. Therefore, instead of directly learning the complex deep enhancement models, it is proposed to select proper color‐transfer templates by learning the latent consistency between the templates and the raw underwater images. The proposed new enhancement strategy alleviates the problem caused by incomplete color‐correction models and provides more stable enhancements by utilizing color transfer with consideration of global color distribution consistency and local visual contrast. Comprehensive experiments conducted on UIEB, RUIE, URPC and SQUID datasets demonstrate the good performance and great potential of the proposed new underwater image enhancement strategy.

  • Research Article. Human-Visual-System-Inspired Underwater Image Quality Measures. Karen Panetta + 2 more. IEEE Journal of Oceanic Engineering, Jul 1, 2016. DOI: 10.1109/joe.2015.2469915. Cited by 1160.

Underwater images suffer from blurring effects, low contrast, and grayed out colors due to the absorption and scattering effects under the water. Many image enhancement algorithms for improving the visual quality of underwater images have been developed. Unfortunately, no well-accepted objective measure exists that can evaluate the quality of underwater images similar to human perception. Predominant underwater image processing algorithms use either a subjective evaluation, which is time consuming and biased, or a generic image quality measure, which fails to consider the properties of underwater images. To address this problem, a new nonreference underwater image quality measure (UIQM) is presented in this paper. The UIQM comprises three underwater image attribute measures: the underwater image colorfulness measure (UICM), the underwater image sharpness measure (UISM), and the underwater image contrast measure (UIConM). Each attribute is selected for evaluating one aspect of the underwater image degradation, and each presented attribute measure is inspired by the properties of human visual systems (HVSs). The experimental results demonstrate that the measures effectively evaluate the underwater image quality in accordance with the human perceptions. These measures are also used on the AirAsia 8501 wreckage images to show their importance in practical applications.
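The UIQM's combination of the three attribute measures can be sketched as a linear weighting. The linear form, the example coefficients, and the colorfulness stand-in below are illustrative assumptions, not the paper's exact definitions; consult the paper for the real UICM/UISM/UIConM formulas and weights.

```python
import numpy as np

def uiqm_combine(uicm, uism, uiconm, c=(0.0282, 0.2953, 3.5753)):
    """Combine the three attribute scores into one quality score.
    The linear form and coefficients are assumptions for illustration."""
    return c[0] * uicm + c[1] * uism + c[2] * uiconm

def simple_colorfulness(img):
    """Very rough stand-in for UICM: spread and shift of the RG and YB
    opponent channels (the real UICM uses asymmetric alpha-trimmed means).
    img: float array in [0, 1], shape (H, W, 3)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    rg, yb = r - g, 0.5 * (r + g) - b
    return np.hypot(rg.std(), yb.std()) + 0.3 * np.hypot(rg.mean(), yb.mean())
```

A grayscale image scores zero on this colorfulness proxy, matching the intuition that "grayed out" underwater images should be penalized on the color attribute.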

  • Research Article. Multi-prior underwater image restoration method via adaptive transmission. Wenyi Ge + 3 more. Optics Express, Jun 21, 2022. DOI: 10.1364/oe.463865. Cited by 7.

Captured underwater images usually suffer from severe color cast and low contrast due to wavelength-dependent light absorption and scattering. These degradation issues affect the accuracy of target detection and visual understanding. The underwater image formation model is widely used to improve the visual quality of underwater images. Accurate transmission map and background light estimation are the keys to obtaining clear images. We develop a multi-priors underwater image restoration method with adaptive transmission (MUAT). Concretely, we first propose a calculation method of the dominant channel transmission to cope with pixel interference, which combines two priors of the difference between atmospheric light and pixel values and the difference between the red channel and the blue-green channel. Besides, the attenuation ratio between the superior and inferior channels is adaptively calculated with the background light to solve the color distortion and detail blur caused by the imaging distance. Ultimately, the global white balance method is introduced to solve the color distortion. Experiments on several underwater scene images show that our method obtains accurate transmission and yields better visual results than state-of-the-art methods.

  • Research Article. Degradation-aware and color-corrected network for underwater image enhancement. Shibai Yin + 5 more. Knowledge-Based Systems, Oct 17, 2022. DOI: 10.1016/j.knosys.2022.109997. Cited by 22.

  • Research Article. WSUIE: Weakly Supervised Underwater Image Enhancement for Improved Visual Perception. Lin Hong + 4 more. IEEE Robotics and Automation Letters, Oct 1, 2021. DOI: 10.1109/lra.2021.3105144. Cited by 25.

Underwater images inevitably suffer from degradation and blur due to the scattering and absorption of light as it propagates through the water, which hinders the development of underwater visual perception. Existing deep underwater image enhancement methods mainly rely on the strong supervision of a large-scale dataset composed of aligned raw/enhanced underwater image pairs for model training. However, aligned image pairs are not available in most underwater scenes. This work aims to address this problem by proposing a novel weakly supervised underwater image enhancement (named WSUIE) method. Firstly, a novel generative adversarial network (GAN)-based architecture is designed to enhance underwater images by unpaired image-to-image transformation from domain X (raw underwater images) to domain Y (arbitrary high-quality images), which alleviates the need for aligned underwater image pairs. Then, a new objective function is formulated by exploring intrinsic depth information of underwater images to increase the depth sensitivity of our method. In addition, a dataset with unaligned image pairs (named UUIE) is provided for the model training. Many qualitative and quantitative evaluations of the WSUIE method are performed on this dataset, and the results show that this method can provide improved visual perception performance while enhancing visual quality of underwater images.

  • Research Article. Underwater Image Enhancement via Medium Transmission-Guided Multi-Color Space Embedding. Chongyi Li + 5 more. IEEE Transactions on Image Processing, Jan 1, 2021. DOI: 10.1109/tip.2021.3076367. Cited by 632.

Underwater images suffer from color casts and low contrast due to wavelength- and distance-dependent attenuation and scattering. To solve these two degradation issues, we present an underwater image enhancement network via medium transmission-guided multi-color space embedding, called Ucolor. Concretely, we first propose a multi-color space encoder network, which enriches the diversity of feature representations by incorporating the characteristics of different color spaces into a unified structure. Coupled with an attention mechanism, the most discriminative features extracted from multiple color spaces are adaptively integrated and highlighted. Inspired by underwater imaging physical models, we design a medium transmission (indicating the percentage of the scene radiance reaching the camera)-guided decoder network to enhance the response of network towards quality-degraded regions. As a result, our network can effectively improve the visual quality of underwater images by exploiting multiple color spaces embedding and the advantages of both physical model-based and learning-based methods. Extensive experiments demonstrate that our Ucolor achieves superior performance against state-of-the-art methods in terms of both visual quality and quantitative metrics. The code is publicly available at: https://li-chongyi.github.io/Proj_Ucolor.html.

  • Research Article. CCM-Net: Color compensation and coordinate attention guided underwater image enhancement with multi-scale feature aggregation. Li Hong + 5 more. Optics and Lasers in Engineering, Sep 12, 2024. DOI: 10.1016/j.optlaseng.2024.108590. Cited by 5.
