Abstract

The mixed pixel problem is omnipresent in remote sensing images used for urban land use interpretation due to hardware limitations. Subpixel mapping (SPM) is a common way to address this problem by refining the observation scale and producing land cover maps at a finer spatial resolution. Recently, deep learning-based subpixel mapping networks (DLSMNets) have been proposed which, benefiting from their strong representation and learning ability, restore visually pleasing finer mappings. However, the spatial context features of artifacts are usually aggregated and progressively lost during the forward pass of the network without sufficient representation, which makes them difficult to learn and restore. In this article, a semantic information modulated (SIM) deep subpixel mapping network (SIMNet) is proposed, which uses low-resolution semantic images as a prior to reinforce the representation of spatial context features. In SIMNet, a SIM module is proposed to parametrically incorporate the semantic prior into a state-of-the-art (SOTA) feed-forward network architecture in an end-to-end training fashion. Furthermore, stacked SIM modules with residual blocks (SIM_ResBlocks) are adopted to pass the representation of spatial context features to the deep layers, so that it is fully learned during backpropagation. Experiments have been conducted on three public urban-scenario data sets: SIMNet generates clearer outlines of artificial facilities with sufficient spatial context and is distinctive even for individual buildings, which is challenging for other SOTA DLSMNets. The results demonstrate that the proposed SIMNet is a promising approach for high-resolution urban land use mapping from readily available lower-resolution remote sensing images.
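The abstract only describes the SIM module and SIM_ResBlock at a high level. As a rough illustration of how modulating features with a low-resolution semantic prior inside a residual block might look, the sketch below assumes a feature-wise scale-and-shift scheme (in the spirit of spatial feature transform). The class names, channel widths, and use of PyTorch are assumptions for illustration, not the authors' implementation.

```python
import torch.nn as nn


class SIM(nn.Module):
    """Hypothetical semantic information modulation (SIM) module.

    Assumes the low-resolution semantic prior is mapped to per-pixel
    scale and shift maps that modulate the intermediate features.
    Layer widths are illustrative only.
    """

    def __init__(self, feat_channels=64, sem_channels=8):
        super().__init__()
        self.scale = nn.Sequential(
            nn.Conv2d(sem_channels, feat_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, feat_channels, 3, padding=1),
        )
        self.shift = nn.Sequential(
            nn.Conv2d(sem_channels, feat_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, feat_channels, 3, padding=1),
        )

    def forward(self, feat, sem_prior):
        # Modulate spatial-context features with the semantic prior.
        return feat * self.scale(sem_prior) + self.shift(sem_prior)


class SIMResBlock(nn.Module):
    """Hypothetical SIM_ResBlock: SIM modulation inside a residual block,
    so the modulated spatial-context representation is carried to deeper
    layers through the skip connection."""

    def __init__(self, channels=64, sem_channels=8):
        super().__init__()
        self.sim1 = SIM(channels, sem_channels)
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.sim2 = SIM(channels, sem_channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, feat, sem_prior):
        out = self.conv1(self.act(self.sim1(feat, sem_prior)))
        out = self.conv2(self.act(self.sim2(out, sem_prior)))
        return feat + out  # residual path preserves the context features
```

Stacking several such blocks before an upsampling head would give one plausible end-to-end architecture consistent with the description above, but the actual SIMNet design should be taken from the full text.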
