Abstract

The availability and use of optical very high spatial resolution (VHR) satellite images to support refugee/IDP (internally displaced people) camp planning and humanitarian aid are growing. In this research, an integrated approach was used for dwelling classification from VHR satellite images, in which the preliminary results of a convolutional neural network (CNN) model served as input data for a knowledge-based semantic classification within object-based image analysis (OBIA). Unlike standard pixel-based classification methods usually applied to CNN outputs, our integrated approach aggregates CNN results on separately delineated objects, which serve as the basic units of a rule-based classification, so that additional prior knowledge and spatial concepts can be included in the final instance segmentation. An object-based accuracy assessment methodology was used to evaluate the classified dwelling categories at the single-object level. Our findings reveal values of more than 90% for each of the applied metrics: precision, recall and F1-score. We conclude that integrating CNN models with OBIA capabilities is an efficient approach for dwelling extraction and classification, as it incorporates not only sample-derived knowledge but also prior knowledge about refugee/IDP camp situations, such as dwelling size constraints and additional context.
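The abstract does not include code; the sketch below is only a minimal illustration of the aggregation step it describes, assuming a per-pixel CNN prediction raster and an OBIA segmentation raster as inputs (both generated synthetically here). The MIN_DWELLING_PIXELS threshold is a hypothetical stand-in for the paper's dwelling-size constraints, not a value taken from the study.

```python
import numpy as np

# Hypothetical inputs (synthetic, not the paper's data):
# cnn_labels: per-pixel class predictions from a CNN (0 = background, 1..K = dwelling types)
# objects:    per-pixel object IDs from an OBIA segmentation (0 = no object)
rng = np.random.default_rng(0)
cnn_labels = rng.integers(0, 3, size=(100, 100))
objects = np.zeros((100, 100), dtype=int)
objects[10:30, 10:40] = 1   # a large delineated object
objects[50:55, 50:54] = 2   # a small delineated object

MIN_DWELLING_PIXELS = 30    # illustrative size constraint

object_classes = {}
for obj_id in np.unique(objects):
    if obj_id == 0:
        continue
    mask = objects == obj_id
    # Aggregate CNN results per object: majority vote of pixel predictions inside the object
    votes = np.bincount(cnn_labels[mask])
    cls = int(votes.argmax())
    # Rule-based refinement with prior knowledge: reject objects below the size constraint
    if mask.sum() < MIN_DWELLING_PIXELS:
        cls = 0
    object_classes[obj_id] = cls

print(object_classes)  # e.g. {1: <majority class>, 2: 0}
```

In practice the rule base described in the paper would combine such size constraints with further spatial and contextual criteria, but the object-level majority aggregation shown here captures the basic idea of using OBIA objects, rather than pixels, as classification units.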
