Abstract
Segmentation convolutional neural networks (CNNs) are now popular for the semantic segmentation (i.e., dense pixel-wise labeling) of remote sensing imagery, such as color or hyperspectral satellite imagery. In recent years a large number of hand-labeled overhead imagery datasets have emerged, leading to breakthrough performance for CNNs. However, these datasets are typically used in isolation from one another because they are either (i) annotated with heterogeneous object-type labels, or (ii) collected over different geographic areas. This imposes a major bottleneck on the value of these datasets. In this work we present a class-asymmetric loss function that, under certain common conditions, makes it possible to train a single multi-class network using multiple heterogeneously-labeled datasets, even when a target class is unlabeled in some of them. We show, for example, that it is possible to train a segmentation model for buildings, roads, and background using two datasets: one annotated with buildings and one annotated with roads.
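The abstract does not specify the exact form of the class-asymmetric loss. The sketch below illustrates one plausible interpretation: when a dataset lacks annotations for some class, background pixels are not penalized for probability mass assigned to that unlabeled class (it is folded into the background probability). The function name, the use of class 0 as background, and the folding rule are illustrative assumptions, not details from the paper.

```python
import numpy as np

def class_asymmetric_ce(probs, labels, labeled_classes):
    """Sketch of a class-asymmetric cross-entropy loss.

    probs           : (N, C) softmax probabilities, one row per pixel.
    labels          : (N,) integer labels; class 0 is assumed to be background.
    labeled_classes : set of non-background class indices that ARE annotated
                      in the dataset these pixels came from.

    For a background-labeled pixel, probability assigned to any class that is
    NOT annotated in this dataset is folded into the background probability,
    so the model is not punished for predicting an unlabeled class there.
    """
    n, c = probs.shape
    unlabeled = [k for k in range(1, c) if k not in labeled_classes]
    eps = 1e-12  # numerical floor to avoid log(0)
    losses = np.empty(n)
    for i in range(n):
        y = labels[i]
        if y == 0 and unlabeled:
            # Background pixel: count unlabeled-class mass as "background".
            p = probs[i, 0] + probs[i, unlabeled].sum()
        else:
            # Ordinary cross-entropy term for annotated classes.
            p = probs[i, y]
        losses[i] = -np.log(p + eps)
    return losses.mean()
```

For example, on a buildings-only dataset (`labeled_classes={1}`), a background pixel where the model confidently predicts "road" (class 2) incurs almost no loss, whereas the standard symmetric cross-entropy would penalize it heavily.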