Recently, learning-based underwater image enhancement (UIE) methods have made considerable progress, significantly benefiting downstream tasks such as underwater semantic segmentation and underwater depth estimation. Most existing unsupervised UIE methods use the atmospheric image formation model to decompose underwater images into background light, transmission map, and scene radiance. However, they rely on simplified physical models to estimate the transmission map, over-simplifying its complex formation and thus modeling underwater scattering effects imprecisely. Additionally, supervised UIE methods depend heavily on synthetic data or ground truth, which limits their generalization due to the substantial domain gap across different underwater scenarios. To tackle these challenges, we propose a Learnable physical model-guided unsupervised domain adaptation framework for Underwater Image Enhancement, dubbed LUIE. LUIE learns to predict background light, depth, and scene radiance from an underwater image, and incorporates a learnable network that estimates the transmission map from the predicted depth map. To minimize the inter-domain gap between synthetic and real underwater images, we introduce a bi-directional domain adaptation method that alternately swaps the background light between the two domains. Experimental results demonstrate the effectiveness of our method compared to existing approaches, and experiments on high-level vision tasks validate that our enhanced results benefit downstream applications. Real-world experiments on an underwater ROV platform equipped with an NVIDIA Jetson AGX Xavier further confirm the effectiveness and efficiency of our method.
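For reference, the atmospheric image formation model the abstract refers to is conventionally written as follows (a standard formulation; the paper's exact parameterization, in particular the learnable transmission estimator, may differ):

$$ I^{c}(x) = J^{c}(x)\,t^{c}(x) + B^{c}\bigl(1 - t^{c}(x)\bigr), \qquad t^{c}(x) = e^{-\beta^{c} d(x)}, $$

where $I$ is the observed underwater image, $J$ the scene radiance, $B$ the background light, $t$ the transmission map, $d$ the scene depth, $\beta$ the attenuation coefficient, and $c$ indexes the color channel. The simplified exponential mapping from depth $d(x)$ to transmission $t(x)$ is what the proposed learnable network is described as replacing.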