Abstract
Due to the absorption and scattering of light in water, underwater images often suffer from low contrast, color deviation, insufficient exposure, and blurred details, which complicate downstream underwater tasks. In recent years, underwater image enhancement has therefore become increasingly important in marine applications. Most existing enhancement methods focus on pixel-level learning, which can introduce image noise and prevents fine-grained adjustment of the image. In this paper, we propose a dual-branch Transformer-CNN Parameter Filtering network for underwater image enhancement, referred to as DTCPF. Specifically, to better aggregate window information, we introduce an overlapping window self-attention module that strengthens interaction between neighboring windows. In addition, we employ an improved Transformer encoder and decoder, using long-distance attention and reversible neural networks to extract low-frequency and high-frequency information from the image. Finally, we introduce a regression parameter filtering group that predicts enhancement parameters, which are then applied to the image to obtain a reliable underwater enhancement model. Qualitative and quantitative evaluations on four real underwater datasets demonstrate the strong performance of our approach.
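The overlapping window partitioning mentioned in the abstract can be illustrated with a small sketch. The idea is that instead of slicing the feature map into disjoint windows, each window is enlarged by an overlap margin so that adjacent windows share border pixels and can exchange information during attention. The function below is a simplified illustration of this partitioning step only (the window size, overlap, and padding mode are assumptions, not the paper's exact settings), written with numpy rather than the paper's actual network code.

```python
import numpy as np

def overlapping_windows(x, win=4, overlap=2):
    """Partition a feature map into overlapping windows.

    x: (H, W, C) feature map; H and W are assumed divisible by `win`.
    Windows of size (win + 2*overlap) are extracted with stride `win`,
    so adjacent windows share a border of `overlap` pixels. This is an
    illustrative sketch of overlapping-window partitioning, not the
    paper's implementation.
    """
    H, W, C = x.shape
    # Reflect-pad the borders so edge windows are full-sized.
    xp = np.pad(x, ((overlap, overlap), (overlap, overlap), (0, 0)),
                mode="reflect")
    size = win + 2 * overlap
    wins = []
    for i in range(0, H, win):          # stride = win, window = size
        for j in range(0, W, win):
            wins.append(xp[i:i + size, j:j + size, :])
    return np.stack(wins)               # (num_windows, size, size, C)

# An 8x8 map with win=4 yields 2x2 = 4 windows of size 8x8, each
# containing its own 4x4 region plus a 2-pixel shared border.
x = np.random.rand(8, 8, 3)
w = overlapping_windows(x, win=4, overlap=2)
```

Self-attention is then computed within each enlarged window; because neighboring windows overlap, border pixels attend to context from both sides, which is the cross-window interaction the abstract refers to.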
Published in: Journal of Visual Communication and Image Representation