Abstract
Sonar sensors are vital in the marine industry for detecting underwater targets in challenging conditions. Imaging distance and image resolution are negatively correlated due to the propagation characteristics of sound waves in water. Although Super-Resolution (SR) techniques alleviate this limitation, they introduce complex distortions that may compromise the intended utility of reconstructed sonar images. Quantifying image quality is therefore essential, yet existing Image Quality Assessment (IQA) algorithms fail to simultaneously account for reconstruction distortions and the task-specific characteristics of sonar imagery. Furthermore, the scarcity of sonar images poses challenges for deep-learning-based algorithms. To address these issues, we propose a brain-inspired model for Super-Resolution reconstructed Sonar Image Quality Assessment (SRSIQA) based on transfer learning. On the one hand, we adopt an effective feature extractor to extract multi-level features that align with the recognition process of the human brain. On the other hand, we develop an SEA-Block to perform feature weight adjustment and multi-level feature scale matching, since some transferred features do not fit the IQA task well. Experimental results demonstrate the superiority of the proposed method.
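To make the described pipeline concrete, the following is a minimal sketch in PyTorch, assuming a torchvision ResNet-18 as the transferred multi-level feature extractor. The abstract does not specify the internals of the SEA-Block or the regression head; the `SEABlockSketch` below (squeeze-and-excitation style channel re-weighting plus adaptive pooling for scale matching), the `reduction` factor, and the 7x7 matching size are hypothetical stand-ins for illustration, not the authors' actual design.

```python
# Hedged sketch: transfer-learned multi-level features -> per-level
# weight adjustment and scale matching -> a single predicted quality score.
import torch
import torch.nn as nn
from torchvision import models


class SEABlockSketch(nn.Module):
    """Hypothetical SEA-Block: re-weights channels of a transferred feature
    map, then pools it to a common spatial scale for multi-level fusion."""
    def __init__(self, channels: int, out_size: int = 7, reduction: int = 16):
        super().__init__()
        self.weight = nn.Sequential(                 # channel gating (SE-style)
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.match = nn.AdaptiveAvgPool2d(out_size)  # multi-level scale matching

    def forward(self, x):
        return self.match(x * self.weight(x))


class SRSIQASketch(nn.Module):
    """Assumed pipeline: ResNet-18 stages give multi-level features,
    each passes through an SEA-Block, and a small head regresses quality."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool)
        self.stages = nn.ModuleList([backbone.layer1, backbone.layer2,
                                     backbone.layer3, backbone.layer4])
        chans = [64, 128, 256, 512]                  # ResNet-18 stage channels
        self.sea = nn.ModuleList([SEABlockSketch(c) for c in chans])
        self.head = nn.Sequential(nn.Flatten(),
                                  nn.Linear(sum(chans) * 7 * 7, 256),
                                  nn.ReLU(inplace=True),
                                  nn.Linear(256, 1))  # predicted quality score

    def forward(self, x):
        x = self.stem(x)
        feats = []
        for stage, sea in zip(self.stages, self.sea):
            x = stage(x)
            feats.append(sea(x))                     # adjust weights, match scales
        return self.head(torch.cat(feats, dim=1))


if __name__ == "__main__":
    model = SRSIQASketch()
    score = model(torch.randn(1, 3, 224, 224))       # dummy SR sonar image batch
    print(score.shape)                               # -> torch.Size([1, 1])
```

In this sketch the backbone can be frozen and only the SEA-Blocks and head trained, which is one common way transfer learning is used to cope with the scarcity of labeled sonar images.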