Autonomous underwater vehicles (AUVs) based on visual perception play an important role in maritime operations. However, underwater environments often suffer from poor lighting, necessitating artificial light sources. This reliance on artificial lighting frequently results in non-uniform illumination. Furthermore, the absorption and scattering effects of water cause additional degradation, such as color distortion and blurring of details. To address these challenges, we propose a pseudo-Siamese network, named PSNet, designed for underwater optical image enhancement. PSNet separates the non-uniformly illuminated layer from the optimally uniformly illuminated image and utilizes a cascading iteration strategy to enhance image details. To achieve better-balanced prediction quality, we introduce a structure loss and a residual reconstruction loss as additional guides for model learning. Additionally, we incorporate a color consistency loss to mitigate color distortion. To address the lack of training data, we develop a non-uniform illumination model and generate a dataset that includes both non-uniformly illuminated layers and uniformly illuminated images. Comprehensive experimental evaluations show that PSNet significantly enhances the visual quality of underwater optical images and consistently outperforms state-of-the-art approaches across multiple performance metrics.
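The data-generation step described above, compositing a uniformly illuminated image with a synthetic non-uniform illumination layer, can be sketched as follows. The Gaussian-spotlight model, the function names, and the multiplicative composition I = J · L are illustrative assumptions for this sketch, not the paper's actual illumination model:

```python
import numpy as np

def gaussian_illumination_layer(h, w, center=None, sigma=None):
    """Synthesize a non-uniform illumination layer as a 2D Gaussian
    "spotlight", a simple stand-in for an artificial underwater light.
    Returns an (h, w) array in (0, 1], peaking at the light's center."""
    if center is None:
        center = (h / 2.0, w / 2.0)
    if sigma is None:
        sigma = min(h, w) / 3.0
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (ys - center[0]) ** 2 + (xs - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def apply_illumination(image, layer):
    """Darken a uniformly lit image (H, W, 3) with the illumination layer,
    mirroring a retinex-style decomposition I = J * L (an assumption here)."""
    return image * layer[..., None]
```

A training pair would then consist of the degraded composite `apply_illumination(J, L)` as input and the uniformly lit `J` (plus the layer `L`) as supervision targets.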