Abstract

Pixel-wise classification in remote sensing identifies entities in large-scale satellite-based images at the pixel level. Few fully annotated large-scale datasets for pixel-wise classification exist because annotating individual pixels is difficult. This annotation challenge inevitably leads to training data scarcity, which produces overfitted classifiers and degraded classification performance. The lack of annotated pixels also leaves few hard examples of the various entities, which are critical for constructing a robust classification hyperplane. To overcome data scarcity and the lack of hard examples in training, we introduce a two-step hard example generation (HEG) approach that first generates hard example candidates and then mines actual hard examples. In the first step, a generator that creates hard example candidates is learned within an adversarial learning framework by fooling a discriminator and a pixel-wise classification model at the same time. In the second step, mining selects a fixed number of hard examples from a large pool of real and artificially generated examples. To evaluate the effectiveness of the proposed HEG approach, we design a 9-layer fully convolutional network suitable for pixel-wise classification. Experiments show that training with hard examples from the proposed HEG approach improves the pixel-wise classification model's accuracy on red tide detection and hyperspectral image classification tasks.
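The abstract describes the two HEG steps only at a high level, so the following PyTorch sketch is a rough illustration of how they could be wired together, assuming a simple per-pixel (spectral-vector) setting rather than the authors' 9-layer fully convolutional network. All names here (`Generator`, `Discriminator`, `generator_step`, `mine_hard_examples`), the layer sizes, and the particular loss terms are assumptions made for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_BANDS, N_CLASSES = 8, 2  # assumed: 8 spectral bands, binary task (e.g. red tide vs. non-red tide)

class Generator(nn.Module):
    """Maps a noise vector to a synthetic per-pixel spectrum (hard example candidate)."""
    def __init__(self, z_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(),
                                 nn.Linear(64, N_BANDS))

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores how 'real' a per-pixel spectrum looks."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_BANDS, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x)

def generator_step(G, D, classifier, opt_g, batch_size, z_dim=32):
    """Step 1 (candidate generation): update G so its outputs both fool the
    discriminator and confuse the pixel-wise classifier."""
    opt_g.zero_grad()
    z = torch.randn(batch_size, z_dim)
    labels = torch.randint(0, N_CLASSES, (batch_size,))        # labels assigned to candidates
    fake = G(z)
    adv_loss = F.binary_cross_entropy_with_logits(
        D(fake), torch.ones(batch_size, 1))                     # candidates should look real to D
    confuse_loss = -F.cross_entropy(classifier(fake), labels)   # and be hard for the classifier
    (adv_loss + confuse_loss).backward()
    opt_g.step()
    return fake.detach(), labels

def mine_hard_examples(classifier, pool_x, pool_y, k):
    """Step 2 (mining): from a pool of real + generated examples, keep the k
    examples on which the current classifier incurs the highest loss."""
    with torch.no_grad():
        losses = F.cross_entropy(classifier(pool_x), pool_y, reduction="none")
    hard_idx = torch.topk(losses, k).indices
    return pool_x[hard_idx], pool_y[hard_idx]
```

In this sketch, the mined hard examples would then be mixed back into the training batches of the classifier; the loss-based ranking in `mine_hard_examples` is one plausible mining criterion, not necessarily the one used in the paper.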

Highlights

  • Pixel-wise classification is the task of identifying entities at the pixel level in remotely sensed images, such as Earth-observing satellite-based images from multi- or hyperspectral imaging sensors.

  • From this distinctive GOCI image setting, we identified severe issues that highlight the need for the proposed hard example generation (HEG) approach.

  • To meet the need for hard examples in devising, from a small number of training examples, an accurate hyperplane that generalizes to unseen test examples, we introduce a hard example generation (HEG) approach.



Introduction

Pixel-wise classification is the task of identifying entities at the pixel level in remotely sensed images, such as Earth-observing satellite-based images from multi- or hyperspectral imaging sensors. Image segmentation methods treat an image as a composition of multiple instances of a scene or object and delineate boundaries between different instances. Current state-of-the-art image segmentation methods acquire the ability to segment these instances either by using a joint detection and segmentation model [1] or by fine-tuning a detection model [2]. These detection abilities are only useful if the target object or scene provides category-specific contextual or structural information and if each instance covers a relatively large area of the image.
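For concreteness, below is a minimal PyTorch sketch of a fully convolutional per-pixel classifier of the kind evaluated in the paper. The layer count, channel widths, and kernel sizes are placeholders for illustration, not the authors' 9-layer design.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Minimal fully convolutional classifier: maps an input of shape
    (batch, bands, H, W) to per-pixel class logits of shape (batch, classes, H, W).
    Padding keeps the spatial resolution, so every pixel receives a prediction."""
    def __init__(self, in_bands=8, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, n_classes, kernel_size=1)  # per-pixel logits

    def forward(self, x):
        return self.head(self.features(x))

# Example: an 8-band 64x64 patch -> logits for 2 classes at every pixel.
logits = TinyFCN()(torch.randn(1, 8, 64, 64))
print(logits.shape)  # torch.Size([1, 2, 64, 64])
```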

