Abstract
Fusion of remote sensing images and LiDAR data provides complementary information for remote sensing applications such as object classification and recognition. In this paper, we propose a novel multi-source multi-scale hierarchical conditional random field (MSMSH-CRF) model that integrates features extracted from remote sensing images and LiDAR point cloud data for image classification. The MSMSH-CRF model exploits region-based features, the category compatibility of multi-scale images, and the category consistency of multi-source data. The output of the model gives the optimal image classification result. We have evaluated the precision and robustness of the proposed method on airborne data, and the results show that it outperforms the standard CRF method.
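The energy terms described above (region-based data costs, spatial smoothness, cross-scale compatibility, and cross-source consistency) can be sketched as follows. This is a minimal illustrative toy, not the paper's actual formulation: all function and parameter names are assumptions, and the compatibility terms are simplified to Potts-style penalties.

```python
import numpy as np

def msmsh_crf_energy(labels, unary, pairs, w_pair,
                     scale_links, w_scale, source_links, w_source):
    """Toy energy for a multi-source multi-scale hierarchical CRF.

    Illustrative only; names and weights are not the paper's notation.
    labels       : (n_regions,) int array of class labels
    unary        : (n_regions, n_classes) per-region data costs
    pairs        : list of (i, j) index pairs of adjacent regions
    scale_links  : list of (child, parent) region pairs across scales
    source_links : list of (i, j) corresponding regions across sources
    """
    # Data term: cost of assigning each region its current label.
    e = unary[np.arange(len(labels)), labels].sum()
    # Pairwise smoothness: Potts penalty when adjacent labels differ.
    e += w_pair * sum(labels[i] != labels[j] for i, j in pairs)
    # Hierarchical term: penalize child/parent disagreement across scales.
    e += w_scale * sum(labels[c] != labels[p] for c, p in scale_links)
    # Multi-source term: penalize label inconsistency between sources.
    e += w_source * sum(labels[i] != labels[j] for i, j in source_links)
    return e
```

Minimizing such an energy over all labelings (e.g. with graph cuts or loopy belief propagation) yields the classification result.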
Highlights
To overcome the limitations of the aforementioned methods, we present a novel multi-source multi-scale hierarchical conditional random field (MSMSH-CRF) model to fuse features extracted from remote sensing images and LiDAR point cloud data for image classification.
Based on the standard CRF model (Shotton et al., 2009), Yang and Förstner (2011a) introduce a hierarchical conditional random field to address image classification by modeling spatial and hierarchical structures. Perez et al. (2012) formulate a multi-scale CRF model for region labeling in multispectral remote sensing images. Zhang et al. (2013) propose the multi-source hierarchical conditional random field (MSHCRF) model to fuse features extracted from remote sensing images and LiDAR point cloud data for image classification.
For fairness of comparison, the same training and testing sets are used for MSMSH-CRF, MSHCRF, and the standard CRF.
Summary
Fusion of remote sensing images and LiDAR data provides complementary information for remote sensing applications such as object classification and recognition. Many methods have been developed for the fusion of remote sensing images and LiDAR data. In general, these methods fall into three categories: image fusion (Parmehr et al., 2012), feature fusion (Dalponte et al., 2012; Deng and Su, 2012), and decision fusion (Huang et al., 2011; Shimoni et al., 2011). In feature fusion methods, features are usually extracted independently from the different data sources, and the fusion does not account for location correspondence or contextual information, both of which could improve the classification.
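The feature-level fusion described above can be sketched as a simple concatenation of per-region descriptors from the two sources. This is a minimal illustration under the assumption that the image and LiDAR data are already co-registered so that row k of each array describes the same region; the function name and feature dimensions are hypothetical.

```python
import numpy as np

def fuse_region_features(image_feats, lidar_feats):
    """Feature-level fusion of co-registered per-region descriptors.

    Illustrative only: assumes both sources are aligned region-by-region.
    image_feats : (n_regions, d_img) array, e.g. spectral/texture statistics
    lidar_feats : (n_regions, d_lidar) array, e.g. height/intensity statistics
    """
    # Both arrays must describe the same set of regions, in the same order.
    assert image_feats.shape[0] == lidar_feats.shape[0], \
        "sources must be co-registered region-by-region"
    # Concatenate the descriptors along the feature axis.
    return np.hstack([image_feats, lidar_feats])
```

Note that this plain concatenation is exactly the independent extraction the text criticizes: it encodes no location correspondence or contextual relations between regions, which is what the CRF-based models add.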
Published in: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences