Abstract

Single-view height estimation and semantic segmentation have received increasing attention in recent years and play an important role in the photogrammetry and remote sensing communities. The height and semantic information of an image are correlated, and recent works have shown that multi-task learning allows task-related features to complement one another and improves prediction for both tasks. Despite this progress, how to effectively extract and fuse height features and semantic features remains an open issue. In this paper, a self- and cross-enhancement network (SCE-Net) is proposed to jointly perform height estimation and semantic segmentation on single aerial images. A feature separation–fusion module is constructed to separate and fuse height features and semantic features using an attention mechanism, enhancing feature representations across tasks. In addition, a height-guided feature distance loss and a semantic-guided feature distance loss are designed based on deep metric learning to achieve task-aware feature representation enhancement. Extensive experiments are conducted on the Vaihingen and Potsdam datasets to verify the effectiveness of the proposed method. The results demonstrate that SCE-Net outperforms state-of-the-art methods and achieves better performance in both height estimation and semantic segmentation.
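The abstract does not give the exact formulation of the feature distance losses. The sketch below illustrates one plausible semantic-guided variant in PyTorch, assuming a standard contrastive deep metric learning formulation: pixel features sharing a semantic label are pulled together, while features of different labels are pushed apart up to a margin. The function name, margin value, and pair-sampling scheme are illustrative assumptions, not the authors' implementation; a height-guided counterpart could analogously define positive pairs from pixels with similar heights.

```python
import torch
import torch.nn.functional as F

def semantic_guided_feature_distance_loss(features, labels,
                                          margin=1.0, num_pairs=512):
    """Hypothetical contrastive feature distance loss (sketch only).

    features: (B, C, H, W) task features from the network.
    labels:   (B, H, W) integer semantic labels.
    """
    B, C, H, W = features.shape
    feats = features.permute(0, 2, 3, 1).reshape(-1, C)  # (B*H*W, C)
    labs = labels.reshape(-1)                            # (B*H*W,)

    # Randomly sample pixel pairs to keep the loss tractable.
    idx_a = torch.randint(0, feats.size(0), (num_pairs,), device=feats.device)
    idx_b = torch.randint(0, feats.size(0), (num_pairs,), device=feats.device)

    dist = F.pairwise_distance(feats[idx_a], feats[idx_b])  # Euclidean distance
    same = (labs[idx_a] == labs[idx_b]).float()

    # Same class: minimize distance; different class: enforce a margin.
    loss = same * dist.pow(2) + (1 - same) * F.relu(margin - dist).pow(2)
    return loss.mean()
```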
