Abstract

Super-resolution mapping (SRM) can effectively predict the spatial distribution of land cover classes within mixed pixels at a higher spatial resolution than that of the original remotely sensed imagery. Uncertainty arising from land cover fraction errors within mixed pixels is one of the most important factors affecting SRM accuracy. Studies have shown that SRM methods based on deep learning significantly improve land cover mapping accuracy but do not cope well with spectral–spatial errors. This study proposes an end-to-end SRM model based on a spectral–spatial generative adversarial network (SGS) that takes multispectral remotely sensed imagery directly as input and explicitly addresses spectral–spatial errors. The proposed SGS comprises three parts: first, cube-based convolution for spectral unmixing is adopted to generate land cover fraction images; second, a residual-in-residual dense block jointly exploits spectral and spatial information to reduce spectral errors; third, a relativistic average GAN serves as the backbone to further improve super-resolution performance and reduce spectral–spatial errors. SGS was tested in one synthetic and two realistic experiments with multispectral and hyperspectral remotely sensed imagery as inputs, and its results were compared with those of hard classification and several classic SRM methods. The results show that SGS performs well at reducing land cover fraction errors, reconstructing spatial details, removing unrealistic land cover artifacts, and eliminating false recognition.
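
As an illustration of how the three parts named above could fit together, the sketch below pairs a cube-based 3D convolution that predicts per-pixel class fractions with an ESRGAN-style residual-in-residual dense block and the relativistic average GAN losses. This is a minimal PyTorch sketch under assumed settings (channel widths, kernel sizes, band and class counts are hypothetical placeholders), not the exact SGS architecture, which is specified in the paper itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CubeUnmixing(nn.Module):
    """Cube-based 3D convolution over (band, row, col) cubes that predicts
    per-class land cover fraction maps; the softmax makes fractions sum to 1
    per pixel. Channel sizes here are illustrative."""
    def __init__(self, n_bands: int, n_classes: int):
        super().__init__()
        self.conv3d = nn.Conv3d(1, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1))
        self.head = nn.Conv2d(16 * n_bands, n_classes, kernel_size=1)

    def forward(self, x):                         # x: (B, bands, H, W)
        f = F.relu(self.conv3d(x.unsqueeze(1)))   # (B, 16, bands, H, W)
        f = f.flatten(1, 2)                       # merge feature and band dims
        return torch.softmax(self.head(f), dim=1)  # (B, classes, H, W)

class DenseBlock(nn.Module):
    """Five densely connected 3x3 convs with local residual scaling,
    as in ESRGAN's RRDB building block."""
    def __init__(self, ch: int, growth: int = 32, scale: float = 0.2):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(ch + i * growth, growth if i < 4 else ch, 3, padding=1)
            for i in range(5))
        self.scale = scale

    def forward(self, x):
        feats = [x]
        for i, conv in enumerate(self.convs):
            out = conv(torch.cat(feats, dim=1))
            if i < 4:
                out = F.leaky_relu(out, 0.2)
                feats.append(out)
        return x + self.scale * out

class RRDB(nn.Module):
    """Residual-in-residual dense block: three dense blocks wrapped in an
    outer scaled residual connection."""
    def __init__(self, ch: int, scale: float = 0.2):
        super().__init__()
        self.blocks = nn.Sequential(DenseBlock(ch), DenseBlock(ch), DenseBlock(ch))
        self.scale = scale

    def forward(self, x):
        return x + self.scale * self.blocks(x)

def ragan_d_loss(real_logits, fake_logits):
    """Relativistic average GAN discriminator loss: a real sample should
    score higher than the average fake sample, and vice versa."""
    real_rel = real_logits - fake_logits.mean()
    fake_rel = fake_logits - real_logits.mean()
    return (F.binary_cross_entropy_with_logits(real_rel, torch.ones_like(real_rel))
            + F.binary_cross_entropy_with_logits(fake_rel, torch.zeros_like(fake_rel)))

def ragan_g_loss(real_logits, fake_logits):
    """Generator loss: the mirror image of the discriminator loss."""
    real_rel = real_logits - fake_logits.mean()
    fake_rel = fake_logits - real_logits.mean()
    return (F.binary_cross_entropy_with_logits(fake_rel, torch.ones_like(fake_rel))
            + F.binary_cross_entropy_with_logits(real_rel, torch.zeros_like(real_rel)))

if __name__ == "__main__":
    x = torch.randn(2, 102, 32, 32)  # e.g. a small hyperspectral patch
    fractions = CubeUnmixing(n_bands=102, n_classes=5)(x)
    print(fractions.shape, fractions.sum(dim=1).mean())  # (2, 5, 32, 32), ~1.0
```

Unlike a standard GAN, the relativistic average formulation scores each real sample against the mean fake score (and vice versa) rather than judging samples in isolation, a property consistent with the abstract's claim that the GAN backbone suppresses unrealistic land cover artifacts.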
