In underwater environments, imaging devices suffer from water turbidity, light attenuation, scattering, and suspended particles, which lead to images with low quality, poor contrast, and color bias. These degradations pose great challenges for underwater condition monitoring and inspection using conventional vision techniques. In recent years, underwater image enhancement has attracted increasing attention due to its critical role in improving the performance of computer vision tasks such as underwater object detection and segmentation. Because existing methods, built mainly on natural scenes, are limited in improving color richness and color distribution, we propose a novel deep learning-based approach, named Deep Inception and Channel-wise Attention Modules (DICAM), to enhance the quality and contrast of hazy underwater images and to correct their color cast. The proposed DICAM model enhances underwater images by considering both the proportional degradations and the non-uniform color cast. Extensive experiments on two publicly available underwater image enhancement datasets verify the superiority of the proposed model over several state-of-the-art conventional and deep learning-based methods in terms of both full-reference and reference-free image quality assessment metrics. The source code of our DICAM model is available at https://github.com/hfarhaditolie/DICAM.
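To make the two named building blocks concrete, the sketch below shows a generic inception-style multi-branch convolution followed by squeeze-and-excitation-style channel-wise attention in PyTorch. This is a minimal illustration of those standard components, not the authors' DICAM architecture; all module names, channel counts, and the reduction ratio are illustrative assumptions (see the linked repository for the actual implementation).

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel-wise attention (illustrative)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # squeeze: global spatial average
            nn.Conv2d(channels, channels // reduction, 1),  # excitation bottleneck
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                   # per-channel weights in [0, 1]
        )

    def forward(self, x):
        return x * self.fc(x)                               # reweight feature channels

class InceptionBlock(nn.Module):
    """Inception-style parallel convolutions with different receptive fields."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch_ch = out_ch // 3
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, out_ch - 2 * branch_ch, kernel_size=5, padding=2)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1))

class EnhanceBlock(nn.Module):
    """Toy enhancement block: inception features, then channel-wise attention."""
    def __init__(self, in_ch=3, feat_ch=48):
        super().__init__()
        self.inception = InceptionBlock(in_ch, feat_ch)
        self.attention = ChannelAttention(feat_ch)
        self.out = nn.Conv2d(feat_ch, 3, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.sigmoid(self.out(self.attention(self.inception(x))))

# Usage: enhance a batch of RGB underwater images scaled to [0, 1]
model = EnhanceBlock()
enhanced = model(torch.rand(1, 3, 256, 256))
```

The channel-wise attention step is what allows per-channel (and hence color-dependent) reweighting, which is the intuition behind handling a non-uniform color cast.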