Abstract

Snow cover is of great significance for many applications. However, automatic extraction of snow cover from high spatial resolution remote sensing (HSRRS) imagery remains challenging, owing to its multiscale characteristics, its similarity to clouds, and occlusion by the shadows of mountains and clouds. Deep convolutional neural networks for semantic segmentation are the most popular approach to automatic map generation, but they require substantial computing time and resources, as well as a large dataset of pixel-wise annotated HSRRS images, which precludes the application of many otherwise superior models. In this study, these limitations are overcome by a sequence of transfer learning steps. The method starts with a modified aligned ‘Xception’ model pre-trained for object classification on ImageNet. Subsequently, a ‘DeepLab version three plus’ (DeepLabv3+) model is trained on a large dataset of Landsat images and the corresponding snow cover products. Finally, a second transfer learning step fine-tunes the model on a small dataset of imagery from GaoFen-2, the highest-resolution HSRRS satellite in China. Experiments demonstrate the feasibility and effectiveness of this framework for automatic snow cover extraction.
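The abstract outlines a staged transfer-learning workflow (ImageNet-pretrained backbone → training on Landsat with snow cover products as labels → fine-tuning on a small GaoFen-2 dataset). The sketch below illustrates that general workflow only, not the authors' implementation: torchvision does not ship DeepLabv3+ with a modified aligned Xception backbone, so an ImageNet-pretrained ResNet-50 DeepLabv3 model is used as a stand-in, and the `landsat_loader` and `gaofen2_loader` data loaders are hypothetical placeholders.

```python
# Hedged sketch of a two-step transfer-learning pipeline for binary
# snow-cover segmentation. Architecture and data loaders are stand-ins,
# not the paper's exact setup.
import torch
import torch.nn as nn
from torchvision.models import ResNet50_Weights
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 2  # snow / non-snow

# Transfer step 1: segmentation model built on an ImageNet-pretrained
# backbone (ResNet-50 here, standing in for the modified aligned Xception).
model = deeplabv3_resnet50(
    weights=None,
    weights_backbone=ResNet50_Weights.IMAGENET1K_V1,
    num_classes=NUM_CLASSES,
)

criterion = nn.CrossEntropyLoss()


def train(model, loader, epochs, lr):
    """Generic training loop reused for both transfer steps."""
    trainable = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.Adam(trainable, lr=lr)
    model.train()
    for _ in range(epochs):
        for images, masks in loader:       # images: (B, 3, H, W); masks: (B, H, W)
            optimizer.zero_grad()
            logits = model(images)["out"]  # (B, NUM_CLASSES, H, W)
            loss = criterion(logits, masks)
            loss.backward()
            optimizer.step()


# Transfer step 2: train on a large Landsat dataset whose labels come from
# existing snow cover products (landsat_loader is a hypothetical placeholder).
# train(model, landsat_loader, epochs=30, lr=1e-4)

# Transfer step 3: freeze the backbone and fine-tune on the small GaoFen-2
# dataset at a lower learning rate (gaofen2_loader is also hypothetical).
for p in model.backbone.parameters():
    p.requires_grad = False
# train(model, gaofen2_loader, epochs=10, lr=1e-5)
```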
