In recent years, as resource shortages and environmental pollution have grown increasingly serious, the exploration and development of underwater clean energy have become particularly important. At the same time, abundant underwater resources and species have attracted many scientists to research underwater-related tasks. Owing to the diversity and complexity of underwater environments, vision tasks such as underwater target detection and capture are difficult to perform. Digital image technology has matured and achieved remarkable results in many fields, but research on underwater image processing has produced comparatively few effective results. The underwater environment is far more complicated than that on land: natural light attenuates rapidly underwater, so underwater imaging systems must rely on artificial light sources for illumination. As light travels through water, it is severely attenuated by absorption, reflection, and scattering. Collected underwater images therefore inevitably suffer from problems such as a limited visible range, blur, low contrast, uneven illumination, color distortion, and noise. The purpose of image enhancement is to improve or resolve one or more of these problems in a targeted manner, and underwater image enhancement has thus become a key topic in underwater image processing research. In this paper, we propose a conditional generative adversarial network model based on attention U-Net, whose attention gate mechanism filters out invalid feature information and effectively captures contour, local texture, and style information. Furthermore, we formulate an objective function from three different loss functions that evaluate image quality in terms of global content, color, and structural information. We train the model end-to-end on the UIEB real-world underwater image dataset. Comparison experiments show that our method outperforms all compared methods, and ablation experiments show that the proposed loss function outperforms any single loss function. Finally, the generalizability of our method is verified on two different datasets, UIEB and EUVP.
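The abstract does not give implementation details of the attention gate. As a rough illustration only, the following PyTorch sketch shows the additive attention gate commonly used in attention U-Net architectures (in the style of Oktay et al.); the module name, channel arguments, and same-spatial-size assumption are ours, not the paper's.

```python
import torch.nn as nn


class AttentionGate(nn.Module):
    """Additive attention gate in the style of Attention U-Net.

    g: gating signal from the coarser decoder level.
    x: skip-connection features from the encoder.
    The gate learns a per-pixel coefficient in [0, 1] that suppresses
    irrelevant skip features before they are concatenated in the decoder.
    (Illustrative sketch; not the paper's exact design.)
    """

    def __init__(self, g_channels, x_channels, inter_channels):
        super().__init__()
        self.w_g = nn.Sequential(
            nn.Conv2d(g_channels, inter_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(inter_channels),
        )
        self.w_x = nn.Sequential(
            nn.Conv2d(x_channels, inter_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(inter_channels),
        )
        self.psi = nn.Sequential(
            nn.Conv2d(inter_channels, 1, kernel_size=1),
            nn.BatchNorm2d(1),
            nn.Sigmoid(),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, g, x):
        # Assumes g has been upsampled to x's spatial size beforehand.
        alpha = self.psi(self.relu(self.w_g(g) + self.w_x(x)))  # (N, 1, H, W)
        return x * alpha  # gated skip features
```

In a U-Net decoder, the gated output replaces the raw skip connection, so low-level encoder features that do not match the decoder's semantic context are attenuated rather than passed through unchanged.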
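The abstract names the three criteria (global content, color, structure) but not the exact loss terms or weights. The sketch below is a minimal stand-in assuming pixel-wise L1 for global content, a per-pixel cosine color term, and an SSIM-based structural term, combined with the usual cGAN adversarial term; all function names and weights (`w_adv`, `w_content`, `w_color`, `w_ssim`) are illustrative placeholders.

```python
import torch
import torch.nn.functional as F
from pytorch_msssim import ssim  # third-party SSIM implementation


def generator_loss(fake, real, disc_fake_logits,
                   w_adv=1.0, w_content=100.0, w_color=1.0, w_ssim=1.0):
    """Illustrative multi-term generator objective for a cGAN enhancer.

    fake / real:      enhanced and reference images, (N, 3, H, W) in [0, 1].
    disc_fake_logits: discriminator logits for the enhanced images.
    The terms and weights are placeholders, not the paper's actual values.
    """
    # Adversarial term: push the discriminator to label fakes as real.
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))

    # Global content term: pixel-wise L1 between enhanced and reference.
    content = F.l1_loss(fake, real)

    # Color term: angular error between per-pixel RGB vectors.
    color = 1.0 - F.cosine_similarity(fake, real, dim=1).mean()

    # Structural term: 1 - SSIM, so lower means structurally closer.
    structure = 1.0 - ssim(fake, real, data_range=1.0)

    return (w_adv * adv + w_content * content
            + w_color * color + w_ssim * structure)
```

Weighting the content term much higher than the adversarial term follows common cGAN image-to-image practice; the relative weights would in practice be tuned on a validation split.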