Abstract

Highly accurate classifications of urban Land Use/Land Cover (LULC) are critical for many remote sensing applications in urban geography. A review of the literature shows that no single method consistently achieves this. Therefore, the goal of this study is to investigate the ability of out-of-the-box, pixel-based, medium-resolution image fusion classifications to increase the thematic classification accuracy of urban LULC mapping. The objectives identified to achieve this goal were to (1) establish a baseline of nonfusion classification results using supervised and unsupervised algorithms and (2) test two fusion methods to explore whether they would improve classification results. The unsupervised optical classification and the image fusion using radar products (H/A/α) with optical and near-infrared (NIR) bands both achieved excellent overall accuracies (89.4% and 89.1%, respectively). Lower accuracies were obtained using a principal components analysis of both optical and radar data. Overall, optical-radar image fusion improved urban LULC classification. The choice of how LULC classes are aggregated and which classification algorithm is used was found to be critical to achieving adequate results for urban mapping.
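To make the fusion workflow described above concrete, the following is a minimal, illustrative sketch in Python, not the study's actual processing chain. It assumes a co-registered optical/NIR stack and a radar H/A/α (entropy, anisotropy, alpha angle) stack; the array shapes, the number of clusters, and the use of K-means (as a stand-in for an unsupervised classifier such as ISODATA) and scikit-learn PCA are all assumptions for demonstration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical co-registered band stacks (rows x cols x bands); shapes and
# values are placeholders, not the paper's data.
rows, cols = 512, 512
optical_nir = np.random.rand(rows, cols, 4)  # e.g. blue, green, red, NIR
h_a_alpha = np.random.rand(rows, cols, 3)    # radar entropy, anisotropy, alpha

# Pixel-based fusion: stack both sources into one feature vector per pixel.
fused = np.concatenate([optical_nir, h_a_alpha], axis=2)
X = fused.reshape(-1, fused.shape[2])

# Optional PCA fusion step: keep the leading components of the stacked bands.
X_pca = PCA(n_components=3).fit_transform(X)

# Unsupervised classification of the fused features (K-means used here as a
# simple stand-in for the unsupervised algorithms discussed in the study).
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X_pca)
lulc_map = labels.reshape(rows, cols)
```

The resulting cluster map would then be assigned to aggregated LULC classes and assessed against reference data to compute overall accuracy, which is where the choice of class aggregation noted above becomes decisive.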
