Abstract

This study compares pixel-based and object-based approaches to fusing high-resolution multispectral GeoEye-1 imagery with high-resolution COSMO-SkyMed SAR data for land-cover/land-use classification. The per-pixel method comprised a maximum likelihood classification of data fused via the discrete wavelet transform, alongside a classification of the optical images alone. The optical and SAR data were then integrated in an object-oriented environment, with the addition of texture measures derived from the SAR data, and classified with a nearest-neighbor approach. Results were compared against the classification of the GeoEye-1 data alone: per-pixel data fusion did not improve classification accuracy, whereas object-based data integration raised the overall accuracy from 73% to 89%. These results indicate that an object-based approach with additional information layers outperforms standard pixel-based methods in land-cover/land-use classification.
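To illustrate the kind of wavelet-based fusion the per-pixel method relies on, the sketch below performs a one-level 2D Haar transform of two co-registered single-band images, keeps the approximation (low-pass) subband from the optical image and the stronger detail coefficients from either source, then inverts the transform. This is a minimal NumPy sketch, not the paper's exact pipeline: the Haar wavelet, the coefficient-selection rule, and the random test arrays are all assumptions made for the example.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2D Haar transform: approximation + 3 detail subbands."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    cA = (a + b + c + d) / 2.0  # low-pass: coarse spectral content
    cH = (a + b - c - d) / 2.0  # horizontal detail
    cV = (a - b + c - d) / 2.0  # vertical detail
    cD = (a - b - c + d) / 2.0  # diagonal detail
    return cA, cH, cV, cD

def haar_idwt2(cA, cH, cV, cD):
    """Exact inverse of haar_dwt2."""
    h, w = cA.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = (cA + cH + cV + cD) / 2.0
    out[0::2, 1::2] = (cA + cH - cV - cD) / 2.0
    out[1::2, 0::2] = (cA - cH + cV - cD) / 2.0
    out[1::2, 1::2] = (cA - cH - cV + cD) / 2.0
    return out

def dwt_fuse(optical_band, sar_band):
    """Keep the optical approximation; per detail coefficient,
    keep whichever source has the larger magnitude (spatial detail)."""
    cA_o, *det_o = haar_dwt2(optical_band)
    _cA_s, *det_s = haar_dwt2(sar_band)
    pick = lambda o, s: np.where(np.abs(o) >= np.abs(s), o, s)
    details = [pick(o, s) for o, s in zip(det_o, det_s)]
    return haar_idwt2(cA_o, *details)

rng = np.random.default_rng(0)
opt = rng.random((64, 64))
sar = rng.random((64, 64))
fused = dwt_fuse(opt, sar)
print(fused.shape)  # (64, 64)
```

The fused band keeps the optical image's radiometry at coarse scale while injecting SAR edge detail; a real pipeline would apply this per multispectral band (often with multiple decomposition levels) before the maximum likelihood classification.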
