Super-resolution (SR) restores image detail from existing information, enhancing image resolution while preserving quality. Despite the significant achievements of deep-learning-based SR models, their application to underwater sonar scenarios is limited by the scarcity of underwater sonar datasets and the difficulty of recovering texture details. To address these challenges, we propose a multi-scale generative adversarial network (SIGAN) for the super-resolution reconstruction of underwater sonar images. The generator is built on a residual dense network (RDN), which extracts rich local features through densely connected convolutional layers; a Convolutional Block Attention Module (CBAM) is also incorporated to capture detailed texture information by attending to different scales and channels. The discriminator employs a multi-scale discriminative structure, enhancing the perception of detail in both generated and high-resolution (HR) images. Because super-resolved sonar images suffer from increased noise, our loss function emphasizes the PSNR metric and incorporates the L2 loss to improve the quality of the output images. We also constructed a side-scan sonar dataset (DNASI-I) for our experiments. We compared our method with current state-of-the-art super-resolution reconstruction methods on the public dataset KLSG-II and on our self-built dataset DNASI-I. The experimental results show that at a scale factor of 4, the average PSNR of our method was 3.5 dB higher than that of the other methods, and the accuracy of target detection on the super-resolved images improved to 91.4%. Through subjective qualitative comparison and objective quantitative analysis, we demonstrate the effectiveness and superiority of the proposed SIGAN for the super-resolution reconstruction of side-scan sonar images.
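As background for the loss design described above, which emphasizes the PSNR metric and incorporates the L2 loss: PSNR is a direct function of the mean squared (L2) error between the HR reference and the super-resolved output, so minimizing L2 error maximizes PSNR. A minimal NumPy sketch of this relationship (the function names `mse` and `psnr` and the 8-bit peak value of 255 are illustrative conventions, not details taken from the paper):

```python
import numpy as np

def mse(hr: np.ndarray, sr: np.ndarray) -> float:
    """Mean squared error: the per-pixel L2 loss between HR and SR images."""
    return float(np.mean((hr.astype(np.float64) - sr.astype(np.float64)) ** 2))

def psnr(hr: np.ndarray, sr: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means less distortion.

    PSNR = 10 * log10(MAX^2 / MSE), so PSNR rises as the L2 error falls.
    """
    err = mse(hr, sr)
    if err == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / err)
```

For example, a uniform error of 16 gray levels on an 8-bit image gives an MSE of 256 and a PSNR of about 24 dB; halving the error raises PSNR by roughly 6 dB, which is why a PSNR-oriented loss reduces to penalizing L2 error.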