Enhancing underwater images and videos by fusion
This paper describes a novel strategy to enhance underwater videos and images. Built on fusion principles, our strategy derives the inputs and the weight measures only from the degraded version of the image. To overcome the limitations of the underwater medium, we define two inputs that represent color-corrected and contrast-enhanced versions of the original underwater image/frame, as well as four weight maps that aim to increase the visibility of distant objects degraded by medium scattering and absorption. Our strategy is a single-image approach that does not require specialized hardware or knowledge about the underwater conditions or scene structure. Our fusion framework also supports temporal coherence between adjacent frames by performing an effective edge-preserving noise reduction strategy. The enhanced images and videos are characterized by a reduced noise level, better exposedness of the dark regions, and improved global contrast, while the finest details and edges are enhanced significantly. In addition, the utility of our enhancement technique is demonstrated for several challenging applications.
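A minimal sketch of the per-pixel form of the fusion described above, before any multiscale blending: the two derived inputs are combined under their normalized weight maps. Function names and the normalization constant are illustrative, not from the paper.

```python
import numpy as np

def fuse(inputs, weights, eps=1e-6):
    """Naive per-pixel fusion: the output is the weight-normalized sum of
    the derived inputs (e.g. a color-corrected and a contrast-enhanced
    version of the same degraded frame).

    inputs  -- list of HxW (or HxWx3) float arrays in [0, 1]
    weights -- list of HxW float arrays, one per input (need not sum to 1)
    """
    w = np.stack(weights).astype(float)
    w = w / (w.sum(axis=0) + eps)  # normalize so weights sum to ~1 per pixel
    out = sum((wk[..., None] * ik) if ik.ndim == 3 else (wk * ik)
              for wk, ik in zip(w, inputs))
    return np.clip(out, 0.0, 1.0)
```

In the paper the blending is done over a multiscale pyramid rather than per pixel, precisely to avoid halos at sharp weight transitions; this sketch only shows the weighting itself.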
- Conference Article
- 10.1109/aicit55386.2022.9930307
- Sep 16, 2022
This paper describes an improved fusion-based defogging algorithm for underwater images. Following the fusion principle, our algorithm derives its inputs and weight maps solely from the original degraded image. To overcome the limitations of the underwater medium, we define two inputs, representing color-corrected and contrast-enhanced versions of the original underwater image, and four weight maps, which aim to improve the visibility of distant objects degraded by medium scattering and absorption. Our method is a single-image method and does not require specialized hardware or knowledge about underwater conditions or scene structure. Our fusion framework also supports temporal coherence between adjacent frames by applying an efficient edge-preserving denoising strategy. The enhanced image features reduced noise levels, improved exposure in dark areas, and increased overall contrast, while significantly enhancing the finest details and edges.
- Research Article
132
- 10.1109/tip.2019.2919947
- Jun 10, 2019
- IEEE Transactions on Image Processing
We propose an underwater image enhancement model inspired by the morphology and function of the teleost fish retina. We aim to solve the problems of underwater image degradation caused by blurring and nonuniform color bias. In particular, the feedback from color-sensitive horizontal cells to cones and a red channel compensation are used to correct the nonuniform color bias. The center-surround opponent mechanism of the bipolar cells and the feedback from amacrine cells to interplexiform cells, then to horizontal cells, serve to enhance the edges and contrasts of the output image. The ganglion cells with a color-opponent mechanism are used for color enhancement and color correction. Finally, we adopt a luminance-based fusion strategy to reconstruct the enhanced image from the outputs of the ON and OFF pathways of the fish retina. Our model utilizes global statistics (i.e., image contrast) to automatically guide the design of each low-level filter, which realizes self-adaptation of the main parameters. Extensive qualitative and quantitative evaluations on various underwater scenes validate the competitive performance of our technique. Our model also significantly improves the accuracy of transmission map estimation and local feature point matching on underwater images. Our method is a single-image approach that does not require specialized priors about the underwater conditions or scene structure.
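The center-surround opponent mechanism mentioned above is commonly modeled as a difference of Gaussians (DoG): a small "center" blur minus a larger "surround" blur. The sketch below is that generic stand-in, not the paper's exact retinal filter; all parameter values are illustrative.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """1-D normalized Gaussian kernel of half-width `radius`."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def center_surround(img, sigma_c=1.0, sigma_s=3.0, radius=6):
    """Difference-of-Gaussians center-surround response for an HxW float
    image: a sharp 'center' blur minus a wide 'surround' blur, which
    responds strongly at edges and is ~0 in flat regions."""
    def blur(im, sigma):
        k = gaussian_kernel(sigma, radius)
        p = np.pad(im, radius, mode="edge")
        tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, p)
        return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)
    return blur(img, sigma_c) - blur(img, sigma_s)
```

Adding such a response back to the image (with a gain) is a classic way to sharpen edges and boost local contrast, which is the role the abstract assigns to the bipolar-cell pathway.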
- Research Article
952
- 10.1109/tip.2017.2759252
- Oct 5, 2017
- IEEE Transactions on Image Processing
We introduce an effective technique to enhance images captured underwater and degraded by medium scattering and absorption. Our method is a single-image approach that does not require specialized hardware or knowledge about the underwater conditions or scene structure. It builds on the blending of two images that are directly derived from a color-compensated and white-balanced version of the original degraded image. The two images to be fused, as well as their associated weight maps, are defined to promote the transfer of edges and color contrast to the output image. To avoid sharp weight-map transitions creating artifacts in the low-frequency components of the reconstructed image, we also adopt a multiscale fusion strategy. Our extensive qualitative and quantitative evaluation reveals that our enhanced images and videos are characterized by better exposedness of the dark regions, improved global contrast, and sharper edges. Our validation also shows that our algorithm is reasonably independent of the camera settings, and improves the accuracy of several image processing applications, such as image segmentation and keypoint matching.
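The white-balancing step that this method starts from is often implemented with the gray-world assumption: the average of a natural scene should be achromatic. The sketch below shows that common variant only; the paper additionally applies a dedicated red-channel compensation before white balancing, which is not reproduced here.

```python
import numpy as np

def gray_world_balance(img):
    """Gray-world white balance: scale each channel so its mean matches the
    mean over all channels. img is an HxWx3 float array in [0, 1]."""
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel means (R, G, B)
    target = means.mean()                     # global gray level to aim for
    return np.clip(img * (target / means), 0.0, 1.0)
```

For underwater frames the raw gray-world gains on the red channel can be very large (red is absorbed first), which is exactly why a compensation step precedes the balancing in the described method.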
- Conference Article
1
- 10.1109/oceans47191.2022.9977320
- Oct 17, 2022
Underwater images often suffer from color distortion and loss of contrast, due to the absorption and scattering of light as it travels through water. Although the physical process of underwater imaging is similar to that of haze images in the air, traditional dehazing methods cannot produce good results because light at different wavelengths attenuates differently under water. To overcome this problem, we propose a novel underwater image restoration method based on local depth information priors. First, we use a computer-vision-based multi-view geometry method to estimate the local depth information of the image for parameter estimation of the depth compensation model. According to the characteristics of underwater optical imaging, we introduce an underwater color correction method using depth compensation. Second, we propose a method for estimating the global depth image with local depth information priors. Finally, we adopt the global depth image to recover the underwater image. Experimental results demonstrate that the recovered images achieve better visual quality than those of several state-of-the-art methods.
- Conference Article
15
- 10.1109/oceanse.2017.8084916
- Jun 1, 2017
Underwater images often suffer from color and contrast degradation, because light is absorbed and scattered while traveling in water. Although the physical process behind underwater images seems similar to that behind outdoor haze images, conventional dehazing methods fail to generate accurate results, since colors associated with different wavelengths have different attenuation rates in underwater conditions. To overcome this, we propose a novel underwater image restoration method based on color correction and image dehazing. First, we estimate the global background light using a hierarchical search based on quad-tree subdivision combined with ocean optical properties. According to the properties of underwater optical imaging, we then introduce an underwater color correction method using depth compensation, in which a multi-channel guided image filter is proposed to refine the depth image. Finally, we adopt the non-local image dehazing algorithm to restore the underwater images. Experimental results demonstrate that the restored images achieve better visual quality than those of several state-of-the-art methods.
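The quad-tree search for the background light mentioned above can be sketched as follows: repeatedly split the image into four quadrants, keep the brightest one, and stop when the block is small. This is the generic brightest-quadrant variant only; the paper further combines the search with ocean optical properties, which this sketch omits, and the `min_size` threshold is an assumption.

```python
import numpy as np

def estimate_background_light(img, min_size=4):
    """Hierarchical quad-tree search for the global background light:
    repeatedly keep the quadrant with the highest mean intensity, then
    return the mean color of the final block. img is HxWx3, float."""
    gray = img.mean(axis=2)
    y0, y1, x0, x1 = 0, gray.shape[0], 0, gray.shape[1]
    while (y1 - y0) > min_size and (x1 - x0) > min_size:
        ym, xm = (y0 + y1) // 2, (x0 + x1) // 2
        quads = [(y0, ym, x0, xm), (y0, ym, xm, x1),
                 (ym, y1, x0, xm), (ym, y1, xm, x1)]
        # descend into the quadrant with the highest mean brightness
        y0, y1, x0, x1 = max(quads, key=lambda q: gray[q[0]:q[1], q[2]:q[3]].mean())
    return img[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
```

The hierarchical search makes the estimate robust to isolated bright pixels (e.g. specular highlights), unlike simply taking the brightest pixel in the frame.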
- Conference Article
18
- 10.1117/12.852339
- Apr 23, 2010
The main challenge in underwater imaging and image analysis is to overcome the effects of blurring due to the strong scattering of light by the water and its constituents. This blurring adds complexity to already challenging problems like object detection and localization. The current state-of-the-art approaches for object detection and localization normally involve two components: (a) a feature detector that extracts a set of feature points from an image, and (b) a feature matching algorithm that tries to match the feature points detected from a target image to a set of template features corresponding to the object of interest. A successful feature matching indicates that the target image also contains the object of interest. For underwater images, the target image is taken in underwater conditions while the template features are usually extracted from one or more training images that are taken out-of-water or in different underwater conditions. In addition, the objects in the target image and the training images may show different poses, including rotation, scaling, translation transformations, and perspective changes. In this paper we investigate the effects of various underwater point spread functions on the detection of image features using many different feature detectors, and how these functions affect the capability of these features when they are used for matching and object detection. This research provides insight to further develop robust feature detectors and matching algorithms that are suitable for detecting and localizing objects from underwater images.
- Book Chapter
- 10.1007/978-3-030-57881-7_64
- Jan 1, 2020
Underwater optical images often suffer from color cast, edge blurring, and low contrast due to medium absorption and scattering in water. To solve these problems, we propose an effective technique to improve underwater image quality. First, we introduce an effective color balance strategy based on an affine transform to address the color distortion. Then we convert the underwater image from RGB color space to CIE-Lab color space for contrast improvement. In the nonsubsampled contourlet transform (NSCT) domain of the 'L' component, global contrast adjustment and multi-scale edge sharpening are conducted on the lowpass and bandpass directional subbands, respectively. Finally, a color-corrected and contrast-enhanced output image is generated by the inverse NSCT and conversion back to RGB color space. The proposed method is a single-image approach that does not require prior knowledge about the underwater imaging conditions. Experimental results show that our method outperforms state-of-the-art methods in both qualitative and quantitative evaluation. It generally yields good perceptual quality, with significant enhancement of the global contrast, the color, and the image structure details.
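An affine color-balance step like the one described above can be sketched as a per-channel linear remapping to a common target mean and spread. The target values below are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

def affine_color_balance(img, target_mean=0.5, target_std=0.2, eps=1e-6):
    """Per-channel affine transform: remap each RGB channel so all three
    share a common target mean and standard deviation, which removes the
    relative color cast. img is HxWx3, float in [0, 1]."""
    out = np.empty_like(img)
    for c in range(3):
        ch = img[..., c]
        out[..., c] = (ch - ch.mean()) / (ch.std() + eps) * target_std + target_mean
    return np.clip(out, 0.0, 1.0)
```

Because an affine map preserves the ordering of intensities within each channel, this correction changes the color balance without destroying edges or local structure, which the later NSCT stage then sharpens.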
- Conference Article
22
- 10.1109/ispa.2015.7306031
- Sep 1, 2015
Blurring and color cast are two of the most challenging problems in underwater imaging. The poor quality hinders automatic segmentation or analysis of the images. In this paper, we describe an image enhancement method to reduce the blurring and color cast caused by the underwater medium. It is a twofold approach: first, a color correction algorithm is applied to correct the color cast and produce a natural appearance of the sub-sea images; second, a pair of learned dictionaries based on sparse representation is applied to sharpen the image and enhance its details. Our strategy is a single-image approach that does not require additional knowledge of the environment, such as depth, object-to-camera distance, or water quality. The experimental results show that the proposed method can efficiently enhance almost every underwater image and offers a quality that is typically sufficient for high-level computer vision algorithms.
- Conference Article
30
- 10.1109/iccic.2016.7919711
- Dec 1, 2016
Degradation of underwater images is an optical phenomenon resulting from the scattering and absorption of light. In this paper, we define a fusion-based approach to enhance the visibility of underwater images. Our method uses only a single hazy image to derive contrast-improved and colour-corrected versions of the original image. It then removes the distortion and uplifts the visibility of distant objects in the image by applying weight maps to each of the derived inputs. We use a multi-scale fusion technique to blend the inputs and weight maps together, ensuring that each fused input contributes its most significant features to the final image. Our technique is simple and straightforward, and effectively contributes to enhancing the quality and appearance of hazy underwater images.
- Conference Article
37
- 10.1109/icinfa.2016.7831889
- Aug 1, 2016
Over the last few decades, underwater image processing has received considerable attention due to its challenging nature and its importance. The quality of underwater images is worse than that of images shot in air, and the images usually appear foggy and hazy. In this paper, a novel algorithm is proposed for underwater image enhancement. Our algorithm operates on a single degraded underwater image and does not require specialized hardware or any knowledge about the underwater conditions. It comprises a combination of classical contrast enhancement techniques and adaptive histogram equalization. We present an evaluation of the proposed algorithm and other approaches on real underwater images. Comprehensive validation experiments performed on real underwater images reveal that the proposed method performs better than the current state of the art.
- Research Article
9
- 10.1109/access.2019.2945576
- Jan 1, 2019
- IEEE Access
Integration of ocean monitoring networks with artificial intelligence has become a popular topic for researchers. Artificial intelligence plays an important role in underwater image processing. For optical images captured in an underwater environment, the light scattering and absorption caused by the water medium result in poor visibility, such as blur and color casts. A novel approach is proposed herein to enhance a single underwater image with poor visibility. Similar to other image enhancement strategies built on fusion principles, our method also generates two input channels from the original degraded image, and these two channels are modulated by their corresponding weight measures. However, the main innovation of our method is a new multilevel decomposition approach based on ℓp-norm (p = 0, 1, 2) decomposition. According to the different sparse-representation abilities of the ℓp-norm with respect to an image's spatial information, our approach decomposes the image into three levels: detail level, structure level, and illuminance level. Thus, these three levels can be manipulated separately. Because this new decomposition approach is based on image structural contents, rather than the direct per-pixel downsampling utilized in traditional multi-resolution pyramid decomposition, it is more accurate and flexible. Additionally, according to specific underwater imaging conditions, we carefully select two input channels and their three associated weight measures: global contrast, local contrast, and saliency. Our method generates output with more accurate details and a better illuminance dynamic range. To our knowledge, we are the first to impose an ℓp-norm-based decomposition strategy on underwater image restoration and enhancement. Extensive qualitative and quantitative evaluations demonstrate that our strategy yields better results than state-of-the-art algorithms.
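The detail/structure/illuminance split described above can be sketched with a pair of smoothers of increasing strength; the three levels then sum back to the original image exactly, so each can be manipulated separately. The box blurs below are a stand-in assumption, not the paper's ℓp-norm optimization, and the kernel sizes are illustrative.

```python
import numpy as np

def box_blur(img, k):
    """Separable box blur with edge padding (a simple stand-in smoother).
    img is an HxW float array; k is an odd kernel size."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, tmp)

def decompose(img, k_small=3, k_large=9):
    """Three-level structural decomposition in the spirit of the paper's
    detail / structure / illuminance split:
        detail      = img - blur_small(img)
        structure   = blur_small(img) - blur_large(img)
        illuminance = blur_large(img)
    By construction the three levels sum back to the original image."""
    s = box_blur(img, k_small)
    l = box_blur(img, k_large)
    return img - s, s - l, l
```

Because the split is additive, boosting the detail level, stretching the illuminance level, and recombining is lossless apart from the intended edits.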
- Research Article
1
- 10.1088/1757-899x/1099/1/012063
- Mar 1, 2021
- IOP Conference Series: Materials Science and Engineering
In this paper, we investigate the problem of underwater hazy image enhancement and restoration. Underwater image processing has many applications in oceanic research and scientific work, such as archaeology, geology, underwater environmental assessment, and the laying of long-distance gas pipelines and communication links across continents, which demand geo-referenced surveying of the ocean bed and the prospection of ancient shipwrecks. Undersea optical imaging faces many difficulties. Submerging a camera underwater requires sufficient space, and maneuvering the camera, whether remotely or in person at the site, is likewise a complex task. However, the major challenge is imposed by the properties of the underwater medium. Underwater hazy image enhancement has gained widespread importance with the rapid development of modern imaging equipment, yet contrast enhancement of a single underwater hazy image remains a cumbersome task for scientific exploration and computational applications. At extreme depths, attenuation of light propagation makes underwater images susceptible to inferior visibility.
- Research Article
96
- 10.1016/j.jvcir.2016.03.029
- Apr 1, 2016
- Journal of Visual Communication and Image Representation
Underwater image enhancement method using weighted guided trigonometric filtering and artificial light correction
- Research Article
1
- 10.1364/oe.538120
- Oct 22, 2024
- Optics express
Due to the scattering and absorption of light, underwater images often exhibit degradation. Given the scarcity of paired real-world data and the inability of synthetic paired data to perfectly approximate real-world data, restoring these degraded images using deep neural networks is challenging. In this paper, a zero-shot underwater multi-scale image enhancement method (Zero-UMSIE) is proposed, which utilizes the isomorphism between the original underwater image and a re-degraded image. Specifically, Zero-UMSIE first estimates three latent components of the original underwater image: the global background light, the transmission map, and the scene radiance. Then, the estimated scene radiance is randomly mixed with the original underwater image to generate re-degraded images. Finally, a multi-scale loss and a set of tailored non-reference loss functions are employed to fine-tune the underwater image and enhance the generalization ability of the network. These functions implicitly control the learning preferences of the network and effectively address issues such as color bias and uneven illumination in underwater images, without the need for additional datasets. The proposed method is evaluated on three widely used real-world underwater image datasets. Extensive experiments on various benchmarks demonstrate that the proposed method is superior to state-of-the-art methods both subjectively and objectively, and that it is competitive and applicable to diverse underwater conditions.
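The three latent components above correspond to the standard simplified underwater image formation model, I = J·t + B·(1 − t). A minimal sketch of that model and of a mixing-style re-degradation step follows; the mixing form and weight `alpha` are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def degrade(J, B, t):
    """Simplified underwater image formation model:
        I = J * t + B * (1 - t)
    with scene radiance J, background light B, and transmission t
    (all broadcastable float arrays in [0, 1])."""
    return J * t + B * (1.0 - t)

def mix_redegrade(I, J_hat, alpha):
    """Re-degradation by mixing: blend the estimated radiance J_hat back
    with the observed image I. alpha in [0, 1] is the mix weight; in a
    zero-shot setting it would be drawn at random per sample."""
    return alpha * I + (1.0 - alpha) * J_hat
```

The re-degraded images play the role of extra "paired" data: the network sees the same scene at two degradation levels without any external dataset.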
- Research Article
4
- 10.1007/s10846-024-02065-8
- Feb 14, 2024
- Journal of Intelligent & Robotic Systems
In this paper, we propose a learning-based restoration approach that learns the optimal parameters for enhancing the quality of different types of underwater images and applies a set of intensity transformation techniques to process raw underwater images. The methodology comprises two steps. First, a Convolutional Neural Network (CNN) regression model is employed to learn enhancement parameters for each underwater image type. Trained on a diverse dataset, the CNN captures complex relationships, enabling generalization to various underwater conditions. Second, we apply intensity transformation techniques to the raw underwater images. These transformations collectively compensate for the visual information lost to underwater degradation, enhancing overall image quality. To evaluate the performance of our proposed approach, we conducted qualitative and quantitative experiments using well-known underwater image datasets (U45 and UIEB), as well as a proposed challenging dataset composed of 276 underwater images from the Amazon region (AUID). The results demonstrate that our approach performs strongly across different underwater image datasets. For the U45 and UIEB datasets, in terms of the PSNR and SSIM quality metrics, we achieved 26.967, 0.847, 27.299, and 0.793, respectively, while the best comparison techniques achieved 26.879, 0.831, 27.157, and 0.788, respectively.
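The PSNR figures reported above follow the standard definition, 10·log10(MAX² / MSE). A minimal reference implementation:

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between two float images whose
    values lie in [0, max_val]. Higher is better; identical images give
    infinity."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

SSIM, the other metric quoted, is structural rather than pixel-wise and needs windowed local statistics, so in practice a library implementation (e.g. scikit-image's) is used instead of hand-rolling it.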