Abstract

Collecting accurate reference data for training and validating remotely sensed land use/land cover (LULC) maps is important. Financial and logistical constraints on projects mean it may not be possible to collect field data. Automated photo recognition methods using deep learning are increasingly being used for mapping land cover (the physical surface at a location); their use in mapping land use (how the surface is used) is less well developed. This study explores the use of geotagged ground-level photographs to identify management intensity on agricultural grasslands. Management intensity on grasslands has implications for soil and water quality, biodiversity and habitat loss, and carbon fluxes, and uncertainty in the level of management intensity occurring at field scale is a source of error in greenhouse gas emission estimates from grasslands. Our study uses convolutional neural networks (CNN) to automate the labelling of unseen images into three management classes (intensive, extensive, abandoned). Ground-level photographs were taken during the 2018 EUROSTAT Land Use/Coverage Area Frame Survey (LUCAS). Classification accuracy of up to 92.8% was achieved. Predicted labels were then used to train a random forest (RF) classifier to map management intensity from Sentinel-1 and Sentinel-2 satellite imagery. Overall accuracy of the RF classification was 84.8% (64.8% after accounting for class imbalance). Potential improvements to both the CNN and RF models are discussed. This study demonstrates the potential of CNN for classifying grassland use from geotagged photographs as part of automated LULC mapping workflows.
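
Below is a minimal sketch of the kind of transfer-learning workflow the abstract describes: fine-tuning a pretrained CNN to label ground-level photographs into the three management classes. The framework (PyTorch/torchvision), backbone (ResNet-50), directory layout, and hyperparameters are illustrative assumptions, not the authors' actual implementation.

```python
# Hedged sketch: fine-tune a pretrained CNN for three grassland management
# classes (intensive, extensive, abandoned). Paths, backbone and
# hyperparameters are hypothetical, not taken from the paper.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing for transfer learning.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder of LUCAS photos sorted into intensive/,
# extensive/ and abandoned/ subdirectories.
train_set = datasets.ImageFolder("lucas_photos/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Replace the ImageNet classification head with a three-class head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 3)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):  # illustrative epoch count
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The predicted class for each photograph could then serve as a training label for a pixel- or parcel-level classifier (such as the RF model mentioned above) applied to Sentinel-1 and Sentinel-2 features.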
