Abstract

Current research using remotely sensed images can draw on a plethora of data sources for land-cover/land-use analysis, and recent advances have shown the advantage of fusing different sources in land-cover studies. Although combined processing of multi-modal imagery should intuitively yield better land-cover classification, little work has been done in this direction and no proper theoretical framework has been laid out. In this work, we provide such a framework, in which the scattering and spectral properties of ground materials (from synthetic aperture radar and multi-spectral images, respectively) are used to distinguish land-cover classes with higher precision. The different kinds of information represented by these two imaging modes are semantically bridged to infer more distinguishable land-cover classes in an unsupervised framework. The proposed technique proceeds in two phases: (1) sampling of seed pixels from the imagery, and (2) training on representative features and prediction of classes using a random forest classifier. Experimental results demonstrate the effectiveness of fusing multi-modal image characteristics for classifying the underlying land cover.
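The abstract only names the two phases, so the following is a minimal sketch of how such a pipeline could look, not the authors' implementation. It assumes co-registered SAR and multi-spectral rasters loaded as NumPy arrays, uses k-means to obtain unsupervised pseudo-labels for seed-pixel sampling, and then trains scikit-learn's RandomForestClassifier on the seeds; the function name `sample_seed_pixels` and all parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

def sample_seed_pixels(features, n_classes, n_seeds_per_class=500, seed=0):
    """Phase 1 (sketch): cluster the fused per-pixel features without labels,
    then keep the pixels closest to each cluster centre as seed samples."""
    km = KMeans(n_clusters=n_classes, random_state=seed).fit(features)
    dists = km.transform(features)  # distance of every pixel to every centre
    seed_idx, seed_lbl = [], []
    for c in range(n_classes):
        members = np.where(km.labels_ == c)[0]
        closest = members[np.argsort(dists[members, c])[:n_seeds_per_class]]
        seed_idx.append(closest)
        seed_lbl.append(np.full(len(closest), c))
    return np.concatenate(seed_idx), np.concatenate(seed_lbl)

# Synthetic stand-ins for co-registered inputs:
# sar: (H, W) backscatter image; ms: (H, W, B) multi-spectral image.
H, W, B = 128, 128, 4
rng = np.random.default_rng(0)
sar = rng.random((H, W))
ms = rng.random((H, W, B))

# Fuse scattering (SAR) and spectral (MS) properties by stacking per-pixel features.
features = np.concatenate([sar.reshape(-1, 1), ms.reshape(-1, B)], axis=1)

# Phase 1: unsupervised seed-pixel sampling.
idx, labels = sample_seed_pixels(features, n_classes=5)

# Phase 2: train a random forest on the seeds and predict the full land-cover map.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(features[idx], labels)
land_cover_map = rf.predict(features).reshape(H, W)
```

Simple feature stacking stands in here for the paper's semantic bridging of the two modalities; the point of the sketch is only the two-phase structure of seed sampling followed by random-forest prediction.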
