Abstract

Underwater images suffer from color casts and low illumination due to the scattering and absorption of light as it propagates through water. These problems can interfere with underwater vision tasks such as recognition and detection. To address these degradation issues, we propose an adaptive learning attention network for underwater image enhancement based on supervised learning, named LANet. First, a multiscale fusion module is proposed to combine spatial information at different scales. Second, we design a novel parallel attention module (PAM) that couples pixel attention and channel attention to focus on illuminated features and the most significant color information. Then, an adaptive learning module (ALM) retains shallow information and adaptively learns important feature information. Furthermore, we employ a multinomial loss function composed of mean absolute error and perceptual loss. Finally, we introduce an asynchronous training mode to improve the network's performance under the multinomial loss function. Qualitative analysis and quantitative evaluations demonstrate the excellent performance of our method on different underwater datasets. The code is available at: https://github.com/LiuShiBen/LANet.
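The abstract's parallel attention idea (pixel attention and channel attention applied side by side to a feature map) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the pooling choices, the sigmoid gating, and the summation used to merge the two branches are all assumptions made here for illustration; see the linked repository for the actual architecture.

```python
import numpy as np

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x):
    # x: feature map of shape (C, H, W).
    # Global average pooling per channel -> sigmoid gate -> rescale each channel.
    pooled = x.mean(axis=(1, 2))            # (C,)
    gate = _sigmoid(pooled)                 # (C,), values in (0, 1)
    return x * gate[:, None, None]

def pixel_attention(x):
    # Per-pixel gate from the channel-wise mean -> rescale each spatial location.
    pooled = x.mean(axis=0)                 # (H, W)
    gate = _sigmoid(pooled)                 # (H, W), values in (0, 1)
    return x * gate[None, :, :]

def parallel_attention(x):
    # Hypothetical combination: run both branches in parallel and sum their
    # outputs, mirroring the coupling of pixel and channel attention the
    # abstract describes.
    return channel_attention(x) + pixel_attention(x)

feat = np.random.rand(8, 4, 4).astype(np.float32)
out = parallel_attention(feat)
print(out.shape)  # (8, 4, 4): attention reweights features, shape is preserved
```

In a real network the gates would be produced by small learned convolutions rather than fixed pooling, but the sketch shows the structural point: both attention branches see the same input and their reweighted outputs are merged.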
