Abstract

Underwater images often suffer from quality degradations such as color deviation, reduced contrast, and blurred details due to wavelength-dependent light absorption and scattering in water media. Recently, convolutional neural networks (CNNs) have achieved impressive success in underwater image enhancement (UIE). However, in almost all CNN-based UIE networks there is still room for improvement in representational capability and in the ability to adapt receptive field (RF) sizes and channel responses. To address these problems, we propose an attention-guided dynamic multibranch neural network (ADMNNet) to obtain high-quality underwater images. Different from existing CNN-based UIE networks, which generally share a fixed RF size across the artificial neurons in one feature layer, we propose an attention-guided dynamic multibranch block (ADMB) to boost the diversity of feature representations by merging the properties of different RFs into a single-stream structure. Concretely, ADMB includes two main components, namely, a dynamic feature selection module (DFSM) and a multiscale channel attention module (MCAM). Inspired by the selective kernel mechanism in the visual cortex, we incorporate a nonlinear strategy into the DFSM that allows the neurons to adjust their RF sizes adaptively by using soft attention to achieve dynamic fusion of multiscale features. In the MCAM, channel attention is designed to exploit the interdependencies among the channelwise features extracted from different branches. Our ADMNNet obtains better visual quality on underwater images captured under diverse scenarios and achieves superior qualitative and quantitative performance compared to state-of-the-art UIE methods.
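The soft-attention fusion described for the DFSM follows the selective-kernel idea: branch outputs with different receptive fields are summed, pooled to a channel descriptor, and then reweighted per branch with a softmax across branches. The following is a minimal NumPy sketch of that general mechanism, not the authors' implementation; the per-branch projection matrices `proj` and all shapes are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dynamic_feature_selection(branches, proj):
    """Selective-kernel-style soft-attention fusion over K branches.

    branches: list of K feature maps, each (C, H, W), e.g. outputs of
              convolutions with different receptive-field sizes.
    proj:     hypothetical per-branch projection matrices, shape (K, C, C),
              standing in for the learned fully connected layers.
    Returns the fused (C, H, W) feature map.
    """
    stacked = np.stack(branches)              # (K, C, H, W)
    fused = stacked.sum(axis=0)               # element-wise sum of branches
    z = fused.mean(axis=(1, 2))               # (C,) global average pooling
    logits = np.einsum('kij,j->ki', proj, z)  # (K, C) per-branch channel scores
    attn = softmax(logits, axis=0)            # soft attention across branches
    # Each channel picks its own mixture of receptive-field sizes.
    return (attn[:, :, None, None] * stacked).sum(axis=0)

# Toy usage with two branches (e.g. 3x3 and 5x5 convolution outputs).
rng = np.random.default_rng(0)
C, H, W, K = 8, 4, 4, 2
branches = [rng.standard_normal((C, H, W)) for _ in range(K)]
proj = rng.standard_normal((K, C, C))
out = dynamic_feature_selection(branches, proj)
print(out.shape)  # (8, 4, 4)
```

Because the softmax is taken across the branch axis for every channel independently, the network can, in effect, assign a different receptive-field mixture to each channel, which is the adaptivity the abstract attributes to the DFSM.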
