Abstract

The spatial distribution of land cover in urban areas, especially 3D objects (buildings and trees), is a fundamental dataset for urban planning, ecological research, disaster management, and other applications. With recent advances in sensor technologies, several types of remotely sensed data are available for the same area, and data fusion has been widely investigated for integrating different data sources in the classification of urban areas. Thermal infrared (TIR) imagery records emitted radiation and has unique radiometric properties; however, its coarse spatial resolution has restricted its application in urban areas. Visible (VIS) imagery, on the other hand, offers high spatial resolution and information in the visible spectrum. The two modalities are therefore complementary for urban classification. This paper evaluates the potential of fusing aerial thermal hyperspectral and visible imagery for classification of urban areas. In the pre-processing step, the thermal imagery is resampled to the spatial resolution of the visible image. Feature-level fusion is then applied to construct a hybrid feature space comprising the visible bands, the thermal hyperspectral bands, and spatial and texture features; in addition, Principal Component Analysis (PCA) is applied to extract principal components. Because of the high dimensionality of this feature space, a dimension-reduction method is performed. Finally, Support Vector Machines (SVMs) classify the reduced hybrid feature space. The results show that using thermal imagery along with visible imagery improves classification accuracy by up to 8% with respect to classification of the visible image alone.
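The pipeline described above can be sketched as follows. This is a minimal illustration with synthetic arrays, not the paper's actual implementation: the band counts, resampling factor, texture feature, PCA dimensionality, SVM settings, and random labels are all assumptions made for demonstration.

```python
# Hedged sketch of the fusion pipeline from the abstract:
# (1) resample coarse thermal bands to the visible image's grid,
# (2) stack visible + thermal + texture features into a hybrid space,
# (3) reduce the hybrid space with PCA, (4) classify with an SVM.
# All shapes, band counts and parameters here are illustrative only.
import numpy as np
from scipy.ndimage import zoom, uniform_filter
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins: a 3-band visible image (60x60 pixels) and a
# 10-band thermal hyperspectral cube at 4x coarser resolution (15x15).
vis = rng.random((60, 60, 3))
tir = rng.random((15, 15, 10))

# (1) Resample the thermal bands to the visible spatial resolution.
tir_up = zoom(tir, (4, 4, 1), order=1)  # bilinear upsampling

# (2) A simple spatial/texture feature: local mean of visible intensity.
intensity = vis.mean(axis=2)
texture = uniform_filter(intensity, size=5)[..., None]

# Stack into a hybrid feature space: one feature vector per pixel.
stack = np.concatenate([vis, tir_up, texture], axis=2)
X = stack.reshape(-1, stack.shape[2])

# (3) PCA both extracts principal components and reduces the
# dimensionality of the hybrid space (folded into one step here).
X = PCA(n_components=8).fit_transform(X)

# Synthetic labels (e.g. building / tree / ground) for illustration.
y = rng.integers(0, 3, size=X.shape[0])

# (4) Train and evaluate an SVM classifier on the reduced features.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```

With real imagery, the random arrays would be replaced by co-registered VIS and TIR rasters and labels from ground-truth polygons; the rest of the flow is unchanged.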

Highlights

  • The spatial distribution of land cover in urban areas is a fundamental dataset for urban planning, ecological research, change detection, disaster management, and other applications.

  • Because no single dataset can model all characteristics of urban areas, and because different data sources are available, much attention has recently been paid to multi-sensor data fusion.

  • The potential of the proposed method is demonstrated on two airborne datasets acquired at different spectral ranges and spatial resolutions: 1) a coarser-resolution LWIR hyperspectral image and 2) fine-resolution visible imagery.


Introduction

The spatial distribution of land cover in urban areas is a fundamental dataset for urban planning, ecological research, change detection, disaster management, and other applications. Different types of remotely sensed data have been widely applied to the classification of urban areas, such as LiDAR (Chehata et al., 2009), aerial visible imagery (Myeong et al., 2001), satellite multispectral imagery (Moran, 2010), and hyperspectral imagery (Samadzadegan et al., 2012). With recent advances in sensor technologies, different types of remotely sensed data are available for the same area. Because no single dataset can model all characteristics of urban areas, and because different data sources are available, much attention has recently been paid to multi-sensor data fusion. Multi-sensor data fusion seeks to integrate data from different sources to obtain more information than can be derived from a single sensor (Kumar et al., 2015). The capabilities of several combinations of datasets have been investigated in the literature, such as hyperspectral and LiDAR data (Liao et al., 2015), optical imagery and synthetic aperture radar (Zhu et al., 2012), aerial imagery and LiDAR data (Huang et al., 2011), and optical imagery, thermal imagery, and LiDAR (Brook et al., 2012).

