ABSTRACT Public release of the entire Landsat archive and the rise of cloud-based geocomputation platforms have greatly facilitated land-cover mapping worldwide, yet acquiring training samples remains a significant challenge. In this study, we developed a transfer-learning approach that reuses existing global land-cover products and previously published training samples to address this challenge. We pretrained a deep neural network (DNN) on low-quality samples generated automatically from existing global land-cover products, and then fine-tuned the model with high-quality samples recycled from those shared by the authors of earlier studies. We applied this approach to produce a 2015 land-cover map of Beijing, China. Based on independent validation samples, the fine-tuned DNN achieved an overall accuracy (OA) of 86.4% and a kappa coefficient of 0.796, matching and in some cases surpassing existing land-cover maps of Beijing. With the same training samples, the fine-tuned DNN outperformed random forest (RF; OA = 70.2%, kappa = 0.588) and support vector machine (SVM; OA = 68.7%, kappa = 0.555) models, demonstrating that the demand for training samples in land-cover classification can be met by combining deep learning models, existing global land-cover products, and recycled training samples.
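The two-stage scheme described above — pretraining on abundant noisy labels, then fine-tuning the same weights on a small clean set — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the tiny numpy MLP, the synthetic 4-band "spectral" features, the 25% label-noise rate standing in for low-quality product-derived samples, and the reduced fine-tuning learning rate are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for per-pixel spectral features with a binary land-cover label.
# noise_rate flips labels to mimic low-quality samples drawn from existing products.
def make_data(n, noise_rate, rng):
    X = rng.normal(size=(n, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    flip = rng.random(n) < noise_rate
    y[flip] = 1 - y[flip]
    return X, y

def init_params(rng, d_in=4, d_h=16):
    return {"W1": rng.normal(scale=0.5, size=(d_in, d_h)), "b1": np.zeros(d_h),
            "W2": rng.normal(scale=0.5, size=(d_h, 1)), "b2": np.zeros(1)}

def forward(p, X):
    h = np.tanh(X @ p["W1"] + p["b1"])          # hidden layer
    z = h @ p["W2"] + p["b2"]
    return h, 1.0 / (1.0 + np.exp(-z[:, 0]))    # sigmoid class probability

def train(p, X, y, lr, epochs):
    # Full-batch gradient descent on the logistic loss; fine-tuning is simply
    # calling this again on new data, starting from the pretrained weights.
    for _ in range(epochs):
        h, prob = forward(p, X)
        g = (prob - y)[:, None] / len(y)        # dL/dz at the output
        p["W2"] -= lr * h.T @ g
        p["b2"] -= lr * g.sum(0)
        gh = (g @ p["W2"].T) * (1 - h ** 2)     # backprop through tanh
        p["W1"] -= lr * X.T @ gh
        p["b1"] -= lr * gh.sum(0)
    return p

def accuracy(p, X, y):
    _, prob = forward(p, X)
    return ((prob > 0.5).astype(int) == y).mean()

# Stage 1: pretrain on many noisy, product-derived samples.
Xp, yp = make_data(2000, noise_rate=0.25, rng=rng)
params = train(init_params(rng), Xp, yp, lr=0.5, epochs=300)

# Stage 2: fine-tune on a small clean set (recycled high-quality samples),
# using a smaller learning rate so the pretrained weights are only adjusted.
Xf, yf = make_data(200, noise_rate=0.0, rng=rng)
params = train(params, Xf, yf, lr=0.1, epochs=300)

# Evaluate on held-out clean samples.
Xt, yt = make_data(1000, noise_rate=0.0, rng=rng)
print(f"test accuracy: {accuracy(params, Xt, yt):.2f}")
```

The design point the sketch makes concrete is that fine-tuning reuses the pretrained parameters rather than re-initializing them, so the small high-quality set only has to correct the pretrained model, not teach the task from scratch.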