Abstract
Semantic segmentation of SAR images using computer vision techniques has gained considerable attention in the research community owing to its wide range of applications. Despite advances in deep learning for image analysis, these models still struggle to segment SAR images because of speckle noise and weak feature extraction. Moreover, deep learning models are difficult to train on small datasets, and model performance is strongly affected by data quality. This calls for an effective network that can extract critical information from low-resolution SAR images. To this end, the present work proposes a novel self-attention module within U-Net for the semantic segmentation of low-resolution SAR images. The self-attention module uses a Laplacian kernel to highlight sharp discontinuities in the features that define object boundaries. The proposed model also employs dilated convolution layers in its initial layers, enabling it to capture larger contextual information more effectively. With an accuracy of 0.84 and an F1-score of 0.83, the proposed model outperforms state-of-the-art techniques for semantic segmentation of low-resolution SAR images. The results clearly demonstrate the importance of the self-attention module and of dilated convolution layers in the initial layers for semantic segmentation of low-resolution SAR images.
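The sketch below illustrates the two ideas the abstract describes: an attention gate driven by a fixed Laplacian kernel to emphasize sharp feature discontinuities (object boundaries), and dilated convolutions in the early encoder layers to enlarge the receptive field. It is a minimal PyTorch sketch under assumed design choices; the layer sizes, class names, and exact gating formula are illustrative and not the authors' published implementation.

```python
# Minimal sketch (assumptions, not the paper's exact architecture):
# a Laplacian-kernel attention gate and an early dilated encoder block.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LaplacianSelfAttention(nn.Module):
    """Attention gate driven by a fixed 3x3 Laplacian filter (assumed design)."""

    def __init__(self, channels: int):
        super().__init__()
        # Fixed (non-trainable) Laplacian kernel, applied depthwise per channel.
        lap = torch.tensor([[0., 1., 0.],
                            [1., -4., 1.],
                            [0., 1., 0.]])
        self.register_buffer("lap", lap.view(1, 1, 3, 3).repeat(channels, 1, 1, 1))
        self.channels = channels
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Depthwise Laplacian response highlights intensity discontinuities.
        edges = F.conv2d(x, self.lap, padding=1, groups=self.channels)
        attn = torch.sigmoid(self.proj(edges))   # boundary-aware weights
        return x * attn + x                      # residual re-weighting of features


class DilatedEncoderBlock(nn.Module):
    """Early U-Net encoder block using dilated convs for larger context (assumed)."""

    def __init__(self, in_ch: int, out_ch: int, dilation: int = 2):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)


if __name__ == "__main__":
    x = torch.randn(1, 1, 128, 128)            # single-channel SAR patch
    feats = DilatedEncoderBlock(1, 32)(x)      # early encoder features
    gated = LaplacianSelfAttention(32)(feats)  # boundary-aware attention
    print(gated.shape)                         # torch.Size([1, 32, 128, 128])
```

In this sketch the Laplacian gate multiplies features by sigmoid-scaled edge responses, so boundary regions are amplified while homogeneous (speckle-dominated) regions are damped; the dilated block expands the receptive field in the first encoder stage without downsampling.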