Abstract

Synthetic Aperture Radar (SAR) imagery has been an important tool for Earth observation and topographic measurement. SAR imagery is rich in structural information, but some important target categories are difficult to recognize in it. Optical imagery, by contrast, contains rich and clear spectral information that benefits semantic image segmentation. The success of deep neural networks for semantic segmentation depends heavily on large-scale, well-labeled data sets, which are hard to collect in practice. In this paper, we consider deep transfer learning for semantic segmentation and propose a novel deep transfer learning method that transfers a semantic segmentation model from SAR imagery to fused SAR and optical imagery. The experimental results show that the proposed method achieves a higher mean Intersection over Union (mIoU) with less training time than other methods.
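The abstract evaluates segmentation quality with mean Intersection over Union (mIoU). As background, a minimal sketch of how mIoU is conventionally computed from per-pixel class labels (the function name and the toy labels below are illustrative, not from the paper):

```python
def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union across classes.

    pred, target: flat sequences of integer class labels, one per pixel.
    Classes absent from both prediction and ground truth are skipped,
    so they do not dilute the average.
    """
    ious = []
    for c in range(num_classes):
        # Pixels labeled c in both maps (intersection) and in either map (union).
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union > 0:
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy 4-pixel example with two classes:
# class 0 -> IoU 1/2, class 1 -> IoU 2/3, mean = 7/12.
print(mean_iou([0, 0, 1, 1], [0, 1, 1, 1], num_classes=2))
```

In practice the same computation is run over confusion-matrix counts accumulated across a whole validation set rather than a single image.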
