Abstract

Underwater robots have broad applications in many fields, such as ocean exploration, marine ranching, and environmental monitoring. However, due to the interference of light scattering and absorption, selective color attenuation, suspended particles, and other complex factors in the underwater environment, it is difficult for robot vision sensors to obtain high-quality underwater images, which is a bottleneck that restricts the visual perception of underwater robots. In this paper, we propose a multi-scale fusion generative adversarial network named Fusion Water-GAN (FW-GAN) to enhance underwater image quality. The proposed model has four convolution branches: three refine the features of the prior inputs and one encodes the original input. The branch features are then fused through the proposed multi-scale fusion connections, and a channel attention decoder finally generates the enhanced result. We conduct qualitative and quantitative comparison experiments on real-world and synthetic distorted underwater image datasets under various degradation conditions. The results show that, compared with recent state-of-the-art underwater image enhancement methods, the proposed method achieves higher quantitative metric scores and better generalization capability. In addition, an ablation study demonstrates the contribution of each component.
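
To make the described architecture concrete, the following is a minimal PyTorch sketch, not the authors' implementation: the branch depths, channel widths, choice of priors, and the exact form of the multi-scale fusion connections and channel attention are assumptions; only the overall layout (three prior branches plus one encoder branch, fused at multiple scales and decoded with channel attention) follows the abstract.

```python
# Minimal sketch of a four-branch fusion generator with a channel-attention
# decoder, in the spirit of the FW-GAN description above. Layer sizes and the
# fusion scheme are illustrative assumptions, not the published configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """3x3 convolution + ReLU used by every branch (assumed building block)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed variant)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)


class FWGANGeneratorSketch(nn.Module):
    def __init__(self, base=32):
        super().__init__()
        # Four convolution branches: one per prior input plus the raw image.
        self.branches = nn.ModuleList(
            [nn.Sequential(conv_block(3, base), conv_block(base, base))
             for _ in range(4)]
        )
        # Multi-scale fusion: concatenate branch features at full and half
        # resolution, then merge each scale before decoding.
        self.down = nn.AvgPool2d(2)
        self.fuse_full = conv_block(4 * base, 2 * base)
        self.fuse_half = conv_block(4 * base, 2 * base)
        # Channel-attention decoder producing the enhanced RGB image.
        self.decoder = nn.Sequential(
            conv_block(4 * base, 2 * base),
            ChannelAttention(2 * base),
            nn.Conv2d(2 * base, 3, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, raw, prior1, prior2, prior3):
        feats = [b(x) for b, x in zip(self.branches, (raw, prior1, prior2, prior3))]
        full = self.fuse_full(torch.cat(feats, dim=1))
        half = self.fuse_half(torch.cat([self.down(f) for f in feats], dim=1))
        half_up = F.interpolate(half, size=full.shape[-2:], mode="bilinear",
                                align_corners=False)
        return self.decoder(torch.cat([full, half_up], dim=1))


# Example: run a batch of 256x256 inputs through the sketch. In practice the
# three priors would be preprocessed versions of the raw image rather than copies.
if __name__ == "__main__":
    g = FWGANGeneratorSketch()
    x = torch.rand(1, 3, 256, 256)
    out = g(x, x, x, x)
    print(out.shape)  # torch.Size([1, 3, 256, 256])
```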
