Abstract

As an important application in remote sensing, landcover classification remains one of the most challenging tasks in very-high-resolution (VHR) image analysis. As a rapidly increasing number of Deep Learning (DL)-based landcover methods and training strategies are claimed to be state-of-the-art, the already fragmented technical landscape of landcover mapping methods has become further complicated. Although a plethora of literature reviews attempt to guide researchers in making an informed choice of landcover mapping methods, these articles either focus on applications in a specific area or revolve around general deep learning models, and thus lack a systematic view of the ever-advancing landcover mapping methods. In addition, issues related to training samples and model transferability have become more critical than ever in an era dominated by data-driven approaches, yet they were addressed to a lesser extent in previous review articles on remote sensing classification. Therefore, in this paper, we present a systematic overview of existing methods, starting from learning methods and the varying basic analysis units for landcover mapping tasks, and moving to challenges and solutions on three aspects of scalability and transferability with a remote sensing classification focus: (1) sparsity and imbalance of training data; (2) domain gaps across different geographical regions; and (3) multi-source and multi-view fusion. We discuss each of these categories of methods in detail, draw concluding remarks on these developments, and recommend potential directions for continued work.
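As a concrete illustration of item (1) above, the sketch below shows one common mitigation for sparse and imbalanced training data in per-pixel classification: weighting the loss by inverse class frequency. This is an illustrative example only; the class legend, the random label raster, and the use of PyTorch's CrossEntropyLoss are assumptions, not methods prescribed by this review.

```python
# Minimal sketch (not from the paper): inverse-frequency class weighting,
# one common way to counter sparse and imbalanced landcover training labels.
import numpy as np
import torch
import torch.nn as nn

num_classes = 5                                          # hypothetical legend, e.g. water, forest, crop, built-up, bare soil
labels = np.random.randint(0, num_classes, (512, 512))   # stand-in for an annotated label raster

# Inverse-frequency ("balanced") weights: rare classes get proportionally larger weights.
counts = np.bincount(labels.ravel(), minlength=num_classes).astype(np.float64)
weights = counts.sum() / (num_classes * np.maximum(counts, 1))

criterion = nn.CrossEntropyLoss(weight=torch.tensor(weights, dtype=torch.float32))

# Usage: logits from any per-pixel classifier, shape (N, num_classes, H, W),
# and integer class targets of shape (N, H, W).
logits = torch.randn(2, num_classes, 64, 64)
targets = torch.randint(0, num_classes, (2, 64, 64))
loss = criterion(logits, targets)
print(float(loss))
```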

Highlights

  • With the collection of imagery from spaceborne sensors/platforms such as the WorldView constellation, IKONOS, GeoEye, Pleiades, Planet, etc., the volume of available VHR images has increased to an unprecedented level, and a large body of approaches has been developed to address the classification of VHR data, from simple statistical-learning-based spectrum classification and spatial–spectral feature extraction towards the recently popularized deep learning (DL)

  • The rest of this review is organized as follows: in Sections 1.2 and 1.3, we briefly review the existing issues of landcover mapping with VHR images and the related efforts to address them; in Section 2, we provide a concise overview of landcover classification paradigms for VHR images from the perspective of the analysis unit, to introduce the necessary concepts and basics; in Section 3, we elaborate on existing approaches addressing the data challenges for remote sensing (RS) landcover classification

  • Both the traditional and DL methods for landcover classification require that the annotated training samples be reasonably similar to the images to be classified, a requirement that is difficult to meet, since data from different geographical regions present vastly different land patterns that are impossible to encapsulate within a single training dataset (one common workaround, fine-tuning a pretrained model on target-region samples, is sketched below)
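
As an illustration of how such domain gaps are often handled in practice, the following sketch fine-tunes a pretrained semantic segmentation network on a small set of target-region patches while freezing its backbone. The choice of torchvision's DeepLabV3, the class count, and the dummy patch tensors are assumptions made for illustration, not the review's prescribed approach.

```python
# Minimal fine-tuning sketch (assumed setup, not from the review):
# adapt a pretrained DeepLabV3 to a new region with few labeled patches.
# Assumes torchvision >= 0.13 and network access to download the weights.
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

num_classes = 6  # hypothetical target-region legend

# Start from pretrained weights and replace the classification head.
model = deeplabv3_resnet50(weights="DEFAULT")
model.classifier[4] = nn.Conv2d(256, num_classes, kernel_size=1)

# Freeze the backbone so only the head adapts to the target region.
for p in model.backbone.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4
)
criterion = nn.CrossEntropyLoss()

# Dummy stand-ins for a handful of annotated target-region patches.
images = torch.randn(4, 3, 256, 256)
masks = torch.randint(0, num_classes, (4, 256, 256))

model.train()
for epoch in range(5):
    optimizer.zero_grad()
    out = model(images)["out"]          # (N, num_classes, H, W)
    loss = criterion(out, masks)
    loss.backward()
    optimizer.step()
```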

Summary

A Review of Landcover Classification with Very-High Resolution Remotely Sensed Optical Images—Analysis Unit, Model Scalability and Transferability

Translational Data Analytics Institute, The Ohio State University, Pomerene Hall, 1760 Neil Ave; College of Forest Resources and Environmental Science, Michigan Technological University, 1400 Townsend Drive, Houghton, MI 49931, USA; Ecosystem Science Center, Michigan Technological University, 1400 Townsend Drive, Houghton, MI 49931, USA

Introduction
Scope and Organization of This Paper
Existing Challenges in the Landcover Classification with VHR Images
Intra-Class Variability and Inter-Class Similarity for VHR Data
Sample
Model and Scene Transferability
An Overview of Typical Landcover Classification Methods
Pixel-Based Mapping Method
Semantic Segmentation
Literature Review of Landcover Classification Methods Addressing the Data Challenges
Methods
Weak Supervision and Semi-Supervision for Noisy and Incomplete Training Sets
Incomplete Samples
Inexact Samples
Inaccurate Samples
Transfer Learning and Domain Adaptation for RS Classification
Domain Adaptation
Model Fine-Tuning
Pixel-Level Fusion
Feature-Level Fusion
Decision-Level Fusion
Multi-View Fusion
Findings
Final Remarks and Future Needs