Abstract

In recent years, multitask learning (MTL) has been widely used to solve several related tasks within a single model, allowing each task to achieve high performance while greatly reducing computational overhead. In this work, we design a collaborative network that simultaneously performs super-resolution semantic segmentation and super-resolution image reconstruction. The algorithm produces high-resolution semantic segmentation and super-resolution reconstruction results from relatively low-resolution input images, which is useful when high-resolution data are difficult to obtain or computing resources are limited. The framework consists of three parts: the semantic segmentation branch (SSB), the super-resolution branch (SRB), and the structural affinity block (SAB), which are responsible for super-resolution semantic segmentation, image super-resolution reconstruction, and feature association between the two branches, respectively. Our proposed method is simple and efficient, and each branch can be replaced by most state-of-the-art models. We evaluated our models on the International Society for Photogrammetry and Remote Sensing (ISPRS) segmentation benchmarks. In particular, for super-resolution semantic segmentation on the Potsdam dataset, the Intersection over Union (IoU) dropped by only 1.8% when the resolution of the input image was reduced by a factor of two. The experimental results show that our framework obtains more accurate semantic segmentation and super-resolution reconstruction results than single-task models.
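The abstract does not give implementation details, so the following is only a minimal sketch of the described three-branch layout, assuming a PyTorch-style design. The `CollaborativeSRNet` class, the placeholder convolutional encoder, the pixel-shuffle upsampling heads, and the 1x1-convolution form of the structural affinity block are illustrative assumptions, not the authors' published code.

```python
import torch
import torch.nn as nn


class CollaborativeSRNet(nn.Module):
    """Shared encoder feeding a segmentation branch (SSB), a super-resolution
    branch (SRB), and a structural affinity block (SAB) that couples them.
    All sub-modules here are placeholders for whatever backbones are used."""

    def __init__(self, num_classes: int = 6, scale: int = 2, feat: int = 64):
        super().__init__()
        # Shared shallow feature extractor (placeholder backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Branch-specific feature refinement (placeholders).
        self.ssb_body = nn.Sequential(nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True))
        self.srb_body = nn.Sequential(nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True))
        # SAB (assumed form): a 1x1 fusion that lets each branch see the
        # other's structural features before the final prediction.
        self.sab = nn.Conv2d(feat * 2, feat, 1)
        # Heads upsample by `scale` via pixel shuffle to the target resolution.
        self.seg_head = nn.Sequential(
            nn.Conv2d(feat, num_classes * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )
        self.sr_head = nn.Sequential(
            nn.Conv2d(feat, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr_image: torch.Tensor):
        shared = self.encoder(lr_image)
        f_seg = self.ssb_body(shared)
        f_sr = self.srb_body(shared)
        # Structural affinity: fuse the two branches' features and feed the
        # fused representation back into both heads.
        fused = self.sab(torch.cat([f_seg, f_sr], dim=1))
        seg_logits = self.seg_head(f_seg + fused)   # high-resolution class logits
        sr_image = self.sr_head(f_sr + fused)       # super-resolved RGB image
        return seg_logits, sr_image


if __name__ == "__main__":
    net = CollaborativeSRNet(num_classes=6, scale=2)
    lr = torch.randn(1, 3, 128, 128)          # relatively low-resolution input
    seg, sr = net(lr)                          # (1, 6, 256, 256), (1, 3, 256, 256)
    print(seg.shape, sr.shape)
```

The usage example assumes a 2x scale factor and six classes, matching the ISPRS Potsdam label set; both outputs are produced at twice the input resolution, as the abstract describes.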
