Abstract

Underwater image enhancement has not received adequate attention, leaving room for further research in the field. In particular, estimation of the global background light in the presence of backscattering has not been adequately addressed. This paper presents a technique that estimates scene depth from pixel differences between global and local patches. The pixel variance is computed between the green and red, green and blue, and red and blue channels, in addition to the absolute mean intensity functions. The global background light is extracted from a moving average of the contribution of suspended light and the brightest pixels within the image color channels. We introduce a block-greedy algorithm in a novel Convolutional Neural Network (CNN) to normalize the attenuation ratios of the different color channels and to select regions with the lowest variance. We address the discontinuity associated with underwater images by transforming both local and global pixel values. We minimize the energy of the proposed CNN via a novel Markov random field to smooth edges and improve the features of the final underwater image. A comparison of the proposed technique against existing state-of-the-art algorithms using entropy, Underwater Color Image Quality Evaluation (UCIQE), Underwater Image Quality Measure (UIQM), Underwater Image Colorfulness Measure (UICM), and Underwater Image Sharpness Measure (UISM) indicates better performance in terms of both average scores and consistency. On average, the proposed technique yields higher UICM values than the reference methods, which explains its better color balance. The mean (μ) values of UCIQE, UISM, and UICM for the proposed method exceed those of the existing techniques. The proposed method achieves improvements of 0.4%, 4.8%, 9.7%, 5.1%, and 7.2% in entropy, UCIQE, UIQM, UICM, and UISM, respectively, over the best existing techniques. Consequently, the dehazed images show sharp, colorful, and clear features in most cases when compared with those produced by existing state-of-the-art methods. Stable standard deviation (σ) values reflect the consistency observed in the visual analysis, in terms of color sharpness and feature clarity, in most of the proposed results relative to the reference methods. Our own assessment shows that the only weakness of the proposed technique is that it applies only to underwater images. Future research could seek to strengthen edges without enhancing color saturation.
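As a rough illustration of two of the steps summarized above, the Python sketch below computes a coarse scene-depth cue from inter-channel pixel differences and estimates a per-channel global background light from a moving average of the brightest pixels in each color channel. The channel ordering (RGB), the normalization to [0, 1], and the parameters top_fraction and window are illustrative assumptions, not values taken from the paper.

    import numpy as np

    def depth_cue(img):
        """Coarse scene-depth cue from inter-channel pixel differences.

        Combines the absolute green-red, green-blue, and red-blue
        differences; larger differences loosely indicate stronger
        wavelength-dependent attenuation. Assumes an RGB float image
        normalized to [0, 1]; parameters are illustrative only.
        """
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        return (np.abs(g - r) + np.abs(g - b) + np.abs(r - b)) / 3.0

    def background_light(img, top_fraction=0.001, window=15):
        """Per-channel global background light estimate.

        Takes the brightest `top_fraction` of pixels in each channel and
        smooths them with a simple moving average before averaging, as a
        hypothetical stand-in for the brightest-pixel / moving-average step.
        """
        h, w, _ = img.shape
        k = max(window, int(top_fraction * h * w))
        light = np.empty(3)
        for c in range(3):
            brightest = np.sort(img[..., c].ravel())[-k:]            # k brightest pixels
            kernel = np.ones(window) / window
            smoothed = np.convolve(brightest, kernel, mode="valid")  # moving average
            light[c] = smoothed.mean()
        return light

    if __name__ == "__main__":
        # Example usage on a synthetic image.
        rng = np.random.default_rng(0)
        img = rng.random((240, 320, 3))
        print("depth cue range:", depth_cue(img).min(), depth_cue(img).max())
        print("background light:", background_light(img))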

Highlights

  • The rise of the digital world has driven up the consumer market for various applications [1,2,3,4,5,6,7,8,9,10,11,12,13]

  • We propose to implement underwater image dehazing technique via a novel Convolutional Neural Network (CNN)

  • The technique we propose intelligently selects pixels from local and global patches to estimate optimal ambient light for underwater image dehazing (see the sketch after this list)
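
A minimal sketch of how such a local/global pixel selection might look is given below: it scans fixed-size patches, sums each patch's per-channel variance, and keeps the patch with the lowest combined variance as an ambient-light candidate alongside the global image statistics. The patch size and the use of non-overlapping windows are assumptions for illustration only, not details taken from the paper.

    import numpy as np

    def lowest_variance_patch(img, patch=32):
        """Select an ambient-light candidate region.

        Scans non-overlapping `patch` x `patch` windows, sums the per-channel
        variance of each window, and returns the coordinates and mean color of
        the lowest-variance window together with the global per-channel mean.
        Hypothetical stand-in for the local/global pixel-selection step.
        """
        h, w, _ = img.shape
        best_xy, best_var, best_mean = None, np.inf, None
        for y in range(0, h - patch + 1, patch):
            for x in range(0, w - patch + 1, patch):
                block = img[y:y + patch, x:x + patch].reshape(-1, 3)
                v = block.var(axis=0).sum()       # combined channel variance
                if v < best_var:
                    best_xy, best_var, best_mean = (y, x), v, block.mean(axis=0)
        global_mean = img.reshape(-1, 3).mean(axis=0)
        return best_xy, best_mean, global_mean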


Introduction

The rise of the digital world has driven up the consumer market for various applications [1,2,3,4,5,6,7,8,9,10,11,12,13]. Haze removal has gained increased attention [14], and image dehazing has been extended from outdoor images to underwater images [15]. This is due to the growth of visual data analysis and comprehension in support of various applications. The human brain has a cortical area that aids in analyzing visual data [16]. Improved image clarity supports image processing tasks that are vital for significant applications such as surveillance and environmental studies.
