Abstract

Spatially explicit information on land cover (LC) is commonly derived using remote sensing, but the lack of training data remains a major challenge for producing accurate LC products. Here, we develop a computer vision methodology to extract LC information from photos from the Land Use-Land Cover Area Frame Survey (LUCAS). Given the large number of photographs available and their comprehensive spatial coverage, the objective is to show how the automatic classification of photos could be used to develop reference data sets for training and validation of LC products, as well as for other purposes. We first selected a representative sample of 1120 photos covering eight major LC types across the European Union. We then applied semantic segmentation to these photos using a neural network (DeepLabv3+) trained on the ADE20k dataset. For each photo, we extracted the original LC identified by the LUCAS surveyor, the segmented objects, and the pixel count for each ADE20k class. Using the latter as input features, we then trained a Random Forest model to classify the LC of the photo. Examining the relationship between the objects/features extracted by DeepLabv3+ and the LC labels provided by the LUCAS surveyors demonstrated how the LC classes can be decomposed into multiple objects, highlighting the complexity of LC classification from photographs. The classification results show a mean F1 score of 89%, increasing to 93% when the Wetland class is excluded. Based on these results, this approach holds promise for the automated retrieval of LC information from the rich source of LUCAS photographs, as well as from the increasing number of geo-referenced photos now becoming available through social media and sites such as Mapillary or Google Street View.
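The classification stage described above can be sketched as follows. This is a minimal, hedged illustration, not the authors' implementation: it assumes each photo is represented by a vector of per-class pixel counts (150 ADE20k classes) produced by a DeepLabv3+ segmentation, and uses synthetic data in place of the real LUCAS photos and labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Synthetic stand-in for the real features: per-photo pixel counts
# for each of the 150 ADE20k classes (assumption: this is the
# feature layout; the paper reports 1120 photos and 8 LC types).
rng = np.random.default_rng(0)
n_photos, n_ade20k_classes, n_lc_types = 1120, 150, 8
X = rng.integers(0, 5000, size=(n_photos, n_ade20k_classes))
y = rng.integers(0, n_lc_types, size=n_photos)  # surveyor LC labels

# Train a Random Forest on the pixel-count features and evaluate
# with a macro-averaged F1 score, as in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
score = f1_score(y_test, clf.predict(X_test), average="macro")
print(f"macro F1 on synthetic data: {score:.2f}")
```

With real segmentation-derived features the model exploits object co-occurrence (e.g. "tree" and "grass" pixels distinguishing Woodland from Grassland); hyperparameters such as `n_estimators` are illustrative, not taken from the paper.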
