Abstract
Salient Object Detection (SOD) models driven by the biologically inspired Focus of Attention (FOA) mechanism can produce highly accurate saliency maps. However, their application to high-resolution Synthetic Aperture Radar (SAR) images is hampered by complex backgrounds. In this paper, we propose a novel hierarchical self-diffusion saliency (HSDS) method for detecting vehicle targets in large-scale SAR images. To reduce the influence of cluttered returns on saliency analysis, we learn a weight vector from the training set to obtain the optimal initial saliency of the superpixels for saliency diffusion. To account for background objects of varying sizes, the saliency analysis is carried out in a multi-scale space, and a saliency fusion strategy is employed to integrate the multi-scale saliency maps. Simulation experiments demonstrate that the proposed method achieves more accurate and stable detection, with fewer false alarms, than benchmark approaches.
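The abstract does not specify the diffusion formulation, but the general idea it describes (propagating an initial saliency seed over a superpixel graph, then fusing maps computed at several scales) can be sketched as follows. This is a minimal illustration assuming a standard manifold-ranking-style diffusion; the function names, the `alpha` parameter, and the averaging fusion are assumptions, not the paper's actual HSDS algorithm.

```python
import numpy as np

def diffuse_saliency(affinity, init_saliency, alpha=0.99):
    """Diffuse an initial saliency seed over a superpixel graph.

    affinity      : (n, n) nonnegative superpixel affinity matrix
    init_saliency : (n,) initial saliency vector (e.g. learned seed)
    alpha         : diffusion strength in (0, 1) -- assumed parameter
    """
    # Row-normalize the affinity matrix into a transition matrix.
    row_sums = affinity.sum(axis=1, keepdims=True)
    transition = affinity / np.maximum(row_sums, 1e-12)
    # Closed-form steady state of s = alpha*T*s + s0, i.e.
    # s = (I - alpha*T)^{-1} s0.
    n = affinity.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * transition, init_saliency)

def fuse_scales(saliency_maps, weights=None):
    """Fuse per-scale saliency vectors by (weighted) averaging."""
    stacked = np.stack(saliency_maps)
    if weights is None:
        weights = np.full(len(stacked), 1.0 / len(stacked))
    fused = np.tensordot(weights, stacked, axes=1)
    # Rescale to [0, 1] so scales are comparable across images.
    span = fused.max() - fused.min()
    return (fused - fused.min()) / max(span, 1e-12)
```

In this sketch each scale would supply its own affinity matrix and seed; the per-scale diffused vectors are then combined by `fuse_scales`. The paper's actual seed learning (the trained weight vector) and its fusion strategy may differ substantially.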