Abstract

Synthetic aperture radar (SAR) image segmentation aims at generating homogeneous regions from a pixel-based image and is the basis of image interpretation. However, most existing segmentation methods neglect appearance and spatial consistency during feature extraction and also require a large amount of labeled training data. In addition, pixel-based processing cannot meet real-time requirements. We hereby present a weakly supervised algorithm to perform the task of segmentation for high-resolution SAR images. For effective segmentation, the input image is first over-segmented into a set of primitive superpixels. The algorithm combines hierarchical conditional generative adversarial nets (CGAN) and conditional random fields (CRF). The CGAN-based networks can leverage abundant unlabeled data to learn parameters, reducing their reliance on labeled samples. To preserve neighborhood consistency in the feature extraction stage, the hierarchical CGAN is composed of two sub-networks, which extract the information of the central superpixels and the corresponding background superpixels, respectively. Afterwards, CRF is utilized to perform label optimization using the concatenated features. Quantified experiments on an airborne SAR image dataset prove that the proposed method can effectively learn feature representations and achieve accuracy competitive with state-of-the-art segmentation approaches. More specifically, our algorithm has a higher Cohen's kappa coefficient and overall accuracy, and its computation time is less than that of current mainstream pixel-level semantic segmentation networks.
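The pipeline described above (over-segment into superpixels, extract per-superpixel features, then optimize labels) can be sketched with simple stand-ins for each stage. The grid partition and mean-intensity features below are illustrative placeholders only, not the paper's SLIC-style over-segmentation or CGAN feature extractor:

```python
import numpy as np

def grid_superpixels(image, cell=4):
    """Partition the image into square cells -- a crude stand-in for the
    superpixel over-segmentation the paper applies first."""
    h, w = image.shape
    n_cols = (w + cell - 1) // cell
    rows = np.arange(h) // cell          # cell row index per pixel row
    cols = np.arange(w) // cell          # cell column index per pixel column
    return rows[:, None] * n_cols + cols[None, :]

def superpixel_features(image, labels):
    """Mean intensity per superpixel -- a placeholder for the features the
    hierarchical CGAN sub-networks would extract."""
    return {int(lab): float(image[labels == lab].mean())
            for lab in np.unique(labels)}

# Toy 8x8 "SAR" image: dark left half, bright right half.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
labs = grid_superpixels(img, cell=4)    # four 4x4 superpixels
feats = superpixel_features(img, labs)  # {0: 0.0, 1: 1.0, 2: 0.0, 3: 1.0}
```

Working on superpixels rather than pixels is what lets the method sidestep the real-time limitation of pixel-based processing: the label-optimization stage then runs over a few hundred regions instead of millions of pixels.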

Highlights

  • The latest technology of synthetic aperture radar (SAR) imaging sensors can achieve all-day and high-resolution imaging for various geographical terrains [1,2]

  • (1) To improve SAR image segmentation performance with insufficient labeled samples, we introduce conditional generative adversarial nets (CGAN) into a conditional random fields (CRF)-based segmentation method

  • Data Description: The SAR database used in our experiment contains the imaging results of FangChengGang in Guangxi Province, China [44]. The imaging range of this database is about 30 × 30 km with a resolution of 2 m, and image size is 1122 × 1419 pixels. There are totally 36 images in the dataset and seven of them are selected as the training set. We manually an…


Introduction

The latest technology of synthetic aperture radar (SAR) imaging sensors can achieve all-day and high-resolution imaging for various geographical terrains [1,2]. SAR image segmentation aims at assigning optimal labels to the pixels and is considered a foundation for many high-level interpretation tasks. Accurate segmentation can greatly reduce the difficulty of subsequent advanced tasks (target detection, recognition [3], tracking, change detection [4], etc.). Unlike the common classification framework, where classifiers (e.g., support vector machine (SVM) [5], random forest (RF) [6], sparse representation [7]) are generally used to assign a discrete or continuous label to each unit, segmentation models need to preserve neighborhood consistency. In the segmentation framework, if the neighbors of a pixel are ocean, the confidence that it belongs to the ocean class should increase. [Table residue: per-class train/test pixel counts and percentages; values not recoverable.]
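The ocean example above is exactly the kind of consistency a CRF's pairwise (smoothness) term encodes. As an illustration only, and not the paper's CRF inference, a single pass of unanimous-neighbour voting over a toy label map shows the effect on an isolated mislabeled pixel:

```python
import numpy as np

def neighborhood_smooth(labels):
    """One pass of voting over the 4-neighbourhood: a pixel whose
    neighbours all agree takes their label. A toy version of the
    neighborhood consistency a CRF's pairwise term enforces."""
    out = labels.copy()
    h, w = labels.shape
    for i in range(h):
        for j in range(w):
            votes = {}
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    votes[labels[ni, nj]] = votes.get(labels[ni, nj], 0) + 1
            best = max(votes, key=votes.get)
            if votes[best] == sum(votes.values()):   # neighbours unanimous
                out[i, j] = best
    return out

# A lone "land" pixel (1) inside an "ocean" region (0) gets relabelled.
sea = np.zeros((5, 5), dtype=int)
sea[2, 2] = 1
smoothed = neighborhood_smooth(sea)
```

A real CRF trades off this smoothness term against the per-superpixel (unary) evidence rather than overruling it unconditionally; the unanimity check here is just a minimal way to show the neighborhood effect without full inference.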

