Abstract

Detailed land use and land cover (LULC) information is essential for land use surveys and applications related to the earth sciences. LULC classification using very high resolution remotely sensed imagery has therefore been a hot topic in the remote sensing community. However, it remains a challenge to successfully extract LULC information from such imagery, because the individual characteristics of the various LULC categories are difficult to describe using single-level features. Traditional pixel-wise or spectral-spatial methods focus on low-level feature representations of the target LULC categories. Deep convolutional neural networks, in contrast, offer great potential for extracting high-level features that describe objects and have been successfully applied to scene understanding and classification. However, existing studies have paid little attention to constructing multi-level feature representations to better characterize each category. In this paper, a multi-level feature representation framework is designed to extract more robust feature representations for the complex LULC classification task using very high resolution remotely sensed imagery. To this end, spectral reflectance, morphological profiles, and morphological attribute profiles are used to describe pixel-level and neighborhood-level information. Furthermore, a novel object-based convolutional neural network (CNN) is proposed to extract scene-level information. The object-based CNN combines the advantages of object-based analysis and CNNs and can perform multi-scale analysis at the scene level. Finally, the random forest method is employed to carry out the classification using the multi-level features. The proposed method was validated on three challenging remotely sensed images, including a hyperspectral image and two multispectral images with very high spatial resolution, and achieved excellent classification performance.
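As a rough illustration of the three feature levels described above, the following Python sketch stacks pixel-level spectral values, a simplified neighborhood-level morphological profile (grey-scale openings and closings standing in for attribute profiles), and a scene-level embedding from a small untrained CNN applied to square patches rather than segmented objects, before feeding the stacked features to a random forest. All array shapes, network sizes, and function names are illustrative assumptions, not the implementation used in the paper.

```python
# Minimal sketch of multi-level feature stacking, under heavy simplification.
# The patch-based CNN, the opening/closing profiles, and all sizes below are
# assumptions for illustration only.
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier
import torch
import torch.nn as nn

def neighborhood_profiles(image, sizes=(3, 5, 7)):
    """Simplified morphological profiles: grey opening/closing per band and size."""
    feats = []
    for band in range(image.shape[-1]):
        for s in sizes:
            feats.append(ndimage.grey_opening(image[..., band], size=(s, s)))
            feats.append(ndimage.grey_closing(image[..., band], size=(s, s)))
    return np.stack(feats, axis=-1)  # (H, W, n_bands * 2 * len(sizes))

class TinySceneCNN(nn.Module):
    """Hypothetical CNN producing a scene-level embedding for an image patch."""
    def __init__(self, in_bands, dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
    def forward(self, x):
        return self.net(x).flatten(1)

def scene_features(image, cnn, patch=9, dim=16):
    """Apply the CNN to a patch around each pixel (object segments in the paper)."""
    H, W, _ = image.shape
    pad = patch // 2
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    out = np.zeros((H, W, dim), dtype=np.float32)
    with torch.no_grad():
        for i in range(H):
            rows = [padded[i:i + patch, j:j + patch] for j in range(W)]
            batch = torch.from_numpy(np.stack(rows)).permute(0, 3, 1, 2).float()
            out[i] = cnn(batch).numpy()
    return out

# Toy data: a 32x32 image with 4 spectral bands and random labels.
rng = np.random.default_rng(0)
image = rng.random((32, 32, 4)).astype(np.float32)
labels = rng.integers(0, 3, size=(32, 32))

pixel_level = image                                   # pixel-level: spectral values
neigh_level = neighborhood_profiles(image)            # neighborhood-level profiles
scene_level = scene_features(image, TinySceneCNN(4))  # scene-level CNN embedding

features = np.concatenate([pixel_level, neigh_level, scene_level], axis=-1)
X = features.reshape(-1, features.shape[-1])
y = labels.ravel()

# Random forest on the stacked multi-level features.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
pred = clf.predict(X).reshape(labels.shape)
```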

Highlights

  • Land use and land cover (LULC) information is an essential part of various geospatial applications in urban areas, such as urban planning, land resource survey and management, and environmental monitoring [1]

  • For classification tasks with high-level semantic categories, many land use types are defined by their functional properties, making it difficult to capture their distinctive features from spectral, texture, shape, or spatial structure features alone in very high resolution (VHR) images

  • We propose a framework that exploits information at different spatial levels to address complex LULC classification using hyperspectral or multispectral VHR remotely sensed imagery

Introduction

Land use and land cover (LULC) information is an essential part of various geospatial applications in urban areas, such as urban planning, land resource survey and management, and environmental monitoring [1]. With the development of very high resolution (VHR) remotely sensed imagery, it has become possible to extract LULC information at a very detailed level [3,4]. For classification tasks with high-level semantic categories, many land use types are defined by their functional properties, so it is difficult to capture their distinctive features from spectral, texture, shape, or spatial structure features alone in VHR images. Given the complex and diverse characteristics of LULC categories in urban areas, it remains challenging to achieve successful LULC classification based on VHR remotely sensed imagery. Developing advanced feature representation and classification techniques that effectively utilize the features of VHR images is therefore important for improving the quality of remotely sensed LULC mapping in urban areas.
