Abstract

To address underwater-specific degradation problems, including low contrast, color deviation, and blurring, this paper proposes a novel semantic attention and relative scene depth-guided network (SARSDN) for underwater image enhancement. The main contributions are as follows: (1) By combining the complementary characteristics of the red–green–blue (RGB), hue–saturation–value (HSV), and Lab color spaces, a multi-color-space feature representation network (MSFRN) is elaborately developed so that domain shift can be effectively alleviated; (2) By exploiting position attention and devising a multi-dilated-convolution depth perception unit, an underwater relative scene depth estimation network (URSDEN) is proposed that adapts attention weights to regions with different degrees of degradation, thereby exclusively accommodating scene depth-dependent attenuation and scattering; (3) An underwater scene semantic segmentation network (USSSN), built on an encoder–decoder framework with deformable convolutions, is devised to estimate a semantic attention map that reduces artifacts and preserves the integrity of foreground objects during enhancement; and (4) The entire SARSDN scheme is ultimately constructed in a modular manner by integrating the MSFRN, URSDEN, and USSSN modules. Comprehensive experiments and comparisons demonstrate that the developed SARSDN framework outperforms typical underwater image enhancement approaches both subjectively and objectively, with UIQM scores 0.8263, 0.9393, 1.1817, 0.5289, 0.6517, 0.5393, 0.7917, and 0.4651 higher than those of the IBLA, ULAP, HLRP, UCM, RGHS, MLLE, UGAN, and FUnIE-GAN schemes, respectively.
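The multi-color-space idea behind MSFRN can be illustrated with a minimal sketch: each RGB pixel is augmented with its HSV representation so that a downstream network receives complementary views of the same color. This is an illustrative assumption, not the paper's implementation; the actual MSFRN also incorporates the Lab space and learned feature extraction, both omitted here for brevity.

```python
import colorsys

def multi_space_features(rgb_pixel):
    """Hypothetical per-pixel feature builder: returns a 6-channel
    vector (R, G, B, H, S, V), all components in [0, 1].

    Stacking color spaces like this exposes hue/saturation structure
    that is hard to read off raw RGB, which is the intuition behind
    MSFRN's multi-color-space representation."""
    r, g, b = rgb_pixel
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return (r, g, b, h, s, v)

# A typical bluish underwater pixel: red is strongly attenuated,
# which shows up as a cyan-blue hue and fairly high saturation.
features = multi_space_features((0.1, 0.4, 0.6))
```

In a full network these six channels would form the input tensor fed to the convolutional backbone, letting the model weight whichever space best separates degraded from clean regions.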
