Abstract

Deep learning-based semantic segmentation methods, such as fully convolutional networks (FCNs), are state-of-the-art techniques for object extraction from high spatial resolution images. However, collecting the massive scene-formed training samples typically required by FCNs is time-consuming and labour-intensive. A suite of automatic sample augmentation schemes based on simulated scene generation is proposed in this study to reduce the manual workload. The proposed schemes include style transfer, target embedding, and mixed modes, utilizing techniques such as texture transfer, image inpainting, and a region-line primitive association framework, which automatically expand the sample set from a small number of real samples. Dock extraction experiments using UNet are conducted on China's GaoFen-2 imagery with the expanded sample sets. Results show that the proposed schemes can successfully generate sufficient simulated samples, increase sample diversity, and subsequently improve semantic segmentation accuracy. Compared with the results using the original real sample set, the F1-score (F1) and intersection over union (IoU) measures of dock extraction accuracy demonstrate maximum improvements of 20.53% and 23.01%, respectively, after sample augmentation.

