Underwater images are often of low clarity and suffer from severe color distortion due to the marine environment and illumination conditions. This directly degrades tasks that rely on image processing, such as marine ecological monitoring and underwater target detection, so enhancing underwater images to improve their quality is necessary. A generative adversarial network with an encoder-decoder structure is proposed to improve the quality of underwater images. The network consists of a generative network and an adversarial network: the generative network enhances the images, while the adversarial network determines whether its input is an enhanced image or a real high-quality image. In the generative network, we first design a residual convolution module to extract more texture and edge information from underwater images. Next, we design a multi-scale dilated convolution module to capture underwater features at different scales. Then, we design a feature fusion adaptive attention module to reduce the interference of redundant features and enhance local perception. Finally, we construct the generative network from these modules together with conventional modules. In the adversarial network, we first design a multi-scale feature extraction module to improve feature extraction, and then combine it with conventional convolution modules to build the adversarial network. Additionally, we propose an improved loss function that introduces a color loss into the conventional loss function. The improved loss function better measures the color discrepancy between the enhanced image and the real image, which helps reduce color distortion in the enhanced results.
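The abstract does not give the exact form of the color loss. As a minimal illustrative sketch (not the authors' implementation), a common way to penalize a global color cast is to compare per-channel averages of the enhanced and reference images and add this term, with an assumed weight `lam`, to a conventional content loss such as L1:

```python
import numpy as np

def color_loss(enhanced, reference):
    """Squared difference between per-channel mean intensities.

    Penalizes a global color cast (e.g. the green/blue shift typical of
    underwater images) independently of fine texture.
    enhanced, reference: float arrays of shape (H, W, 3) in [0, 1].
    """
    mean_e = enhanced.mean(axis=(0, 1))   # (3,) average R, G, B
    mean_r = reference.mean(axis=(0, 1))
    return float(np.mean((mean_e - mean_r) ** 2))

def total_loss(enhanced, reference, adv_term, lam=0.1):
    """One plausible combined objective: L1 content loss plus an
    adversarial term plus the weighted color term. The weight `lam`
    and the choice of L1 are assumptions for illustration."""
    l1 = float(np.mean(np.abs(enhanced - reference)))
    return l1 + adv_term + lam * color_loss(enhanced, reference)
```

In training, `adv_term` would come from the adversarial network's output; the color term is zero when the two images share the same average color, so it only activates on residual color distortion.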
In experimental simulations, the images enhanced by the proposed method achieve the highest PSNR, SSIM, and UIQM values, indicating that the proposed method has superior underwater image enhancement capability compared with other methods.