Abstract

Synthetic aperture radar (SAR) provides weather-independent, day-and-night imagery for Earth monitoring. It is, however, well known that the discriminative power of SAR is lower than that of electro-optical (EO) imagery, leading to relatively weak recognition models when SAR images are used alone. This letter proposes a representation learning framework for SAR building segmentation that incorporates privileged, corresponding EO information at training time. We show that, through knowledge distillation, the learned network can reproduce rich representations of EO images with high fidelity and attains improved segmentation performance. Consequently, only SAR images are needed at test time, and no EO data are required. We further introduce a geometric ensemble loss that regularizes the network's predictions to be invariant under geometric transformations. It handles arbitrary viewing angles (directions of layover) of the airborne SAR sensor and thus helps to produce geometrically consistent results. Experimental results on the SpaceNet 6 data set demonstrate the effectiveness and flexibility of the proposed framework, which outperforms state-of-the-art methods.
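As a rough illustration of the geometric-consistency idea described above (not the authors' implementation, and simplified to NumPy for clarity), one can penalize the disagreement between a model's prediction on an image and its inverse-transformed predictions on geometrically transformed copies of that image. The dihedral transforms and the `model` callable below are illustrative assumptions:

```python
import numpy as np

def dihedral_transforms():
    # The 8 dihedral transforms of a square image (90° rotations,
    # with and without a horizontal flip), each paired with its inverse.
    ops = []
    for k in range(4):
        ops.append((lambda x, k=k: np.rot90(x, k),
                    lambda x, k=k: np.rot90(x, -k)))
        ops.append((lambda x, k=k: np.rot90(np.fliplr(x), k),
                    lambda x, k=k: np.fliplr(np.rot90(x, -k))))
    return ops

def geometric_consistency_loss(model, x):
    """Mean squared difference between the plain prediction and the
    inverse-transformed predictions on transformed inputs.

    A perfectly equivariant model yields zero loss."""
    base = model(x)
    losses = [np.mean((inverse(model(t(x))) - base) ** 2)
              for t, inverse in dihedral_transforms()]
    return float(np.mean(losses))
```

In a training loop, this term would be added to the segmentation loss so that predictions become stable under arbitrary viewing directions.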
