Abstract

Clear images are a prerequisite for high-level underwater vision tasks, but images captured underwater are often degraded by the absorption and scattering of light. Traditional methods have shown some success in addressing this problem, but their dependence on prior knowledge often produces unwanted artifacts. In contrast, learning-based approaches can produce more refined results. However, most popular methods rely on an encoder-decoder configuration that simply learns the nonlinear transformation between input and output images, so their ability to capture details is limited; moreover, significant pixel-level features and multi-scale features are often overlooked. Accordingly, we propose a novel and efficient network that incorporates triple attention and a multi-scale pyramid into an encoder-decoder architecture. Specifically, a triple attention module that captures channel-pixel-spatial features serves as the transformation stage of the encoder-decoder module, focusing on the fog region; a multi-scale pyramid module then refines the preliminary defogging results to improve the visibility of the restored images. Extensive experiments on the EUVP and UFO-120 datasets corroborate that the proposed method outperforms state-of-the-art methods in the quantitative metrics Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), and Patch-based Contrast Quality Index (PCQI), as well as in visual quality.
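To make the channel-pixel-spatial attention idea concrete, the following is a minimal PyTorch sketch of a "triple attention" block, assuming common formulations of each branch (squeeze-and-excitation-style channel attention, a per-pixel attention gate, and a CBAM-style spatial gate). The class names, branch designs, and the sequential combination are illustrative assumptions, not the authors' exact architecture.

```python
# Hypothetical sketch of a channel-pixel-spatial "triple attention" block.
# Branch designs and their ordering are assumptions for illustration only.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Re-weights feature channels using globally pooled statistics."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.ca = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # B x C x 1 x 1
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.ca(x)


class PixelAttention(nn.Module):
    """Produces a per-pixel gate so heavily degraded (foggy) regions get more weight."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pa = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 1, 1),         # B x 1 x H x W
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.pa(x)


class SpatialAttention(nn.Module):
    """Gates spatial locations using channel-wise average and max maps."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)
        max_map = x.max(dim=1, keepdim=True).values
        gate = self.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * gate


class TripleAttention(nn.Module):
    """Applies the three attention branches in sequence (ordering is an assumption)."""
    def __init__(self, channels):
        super().__init__()
        self.block = nn.Sequential(
            ChannelAttention(channels),
            PixelAttention(channels),
            SpatialAttention(),
        )

    def forward(self, x):
        return self.block(x)


if __name__ == "__main__":
    feats = torch.randn(1, 64, 128, 128)      # e.g., bottleneck features from an encoder
    print(TripleAttention(64)(feats).shape)   # torch.Size([1, 64, 128, 128])
```

In such a design, the attended features would be passed to the decoder, and a separate multi-scale pyramid module (not shown) would refine the preliminary defogging result at several resolutions before producing the final image.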
