Subpixel mapping (SPM) addresses the widespread mixed pixel problem in remote sensing images by predicting the spatial distribution of land cover within mixed pixels. However, conventional pixel-based spectral unmixing, a key pre-processing step for SPM, neglects valuable spatial contextual information and struggles with spectral variability, ultimately undermining SPM accuracy. Additionally, although widely used, supervised spectral unmixing is labor-intensive and requires considerable user intervention. To address these issues, this paper proposes a fully automatic, unsupervised object-based SPM (UO-SPM) model that exploits object-scale information to reduce spectral unmixing errors and subsequently enhance SPM. Given that mixed pixels are typically located at the edges of objects (i.e., the inner part of objects is characterized by pure pixels), segmentation and morphological erosion are employed to identify pure pixels within objects and mixed pixels at the edges. More accurate endmembers are extracted from the identified pure pixels for the secondary spectral unmixing of the remaining mixed pixels. Experimental results on 10 study sites demonstrate that the proposed unsupervised object (UO)-based analysis is an effective model for enhancing both spectral unmixing and SPM. Specifically, the spectral unmixing results of UO show an average increase of 3.65% and 1.09% in correlation coefficient (R) compared to fuzzy C-means (FCM) and linear spectral mixture model (LSMM)-derived coarse proportions, respectively. Moreover, the UO-derived results of four SPM methods (i.e., Hopfield neural network (HNN), Markov random field (MRF), pixel swapping (PSA) and radial basis function interpolation (RBF)) exhibit an average increase of 5.89% and 3.04% in overall accuracy (OA) across the four SPM methods and 10 study sites compared to the FCM- and LSMM-based results, respectively. In addition, the proportions of both mixed and pure pixels are more accurately predicted. The advantage of UO-SPM is more evident when the size of land cover objects is larger, benefiting from more accurate identification of objects.
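The sketch below illustrates, under simplifying assumptions, the object-based idea described in the abstract: erode each object of a coarse classification map so that only interior (assumed pure) pixels remain, average their spectra as endmembers, and then unmix the remaining edge (mixed) pixels with a linear spectral mixture model. It is not the authors' implementation; the array shapes, the `object_based_unmixing` function name, the use of NumPy/SciPy, and the unconstrained least-squares solver with clipping and renormalization are illustrative choices.

```python
import numpy as np
from scipy import ndimage


def object_based_unmixing(image, class_map, erosion_iters=1):
    """Sketch of object-based unmixing.

    image:     (rows, cols, bands) spectra.
    class_map: (rows, cols) integer class labels from a coarse classification/segmentation.
    Returns per-pixel class proportions and the mask of pixels treated as pure.
    """
    rows, cols, bands = image.shape
    classes = np.unique(class_map)

    # 1. Morphological erosion of each class region: interior pixels are treated as pure.
    pure_mask = np.zeros((rows, cols), dtype=bool)
    for c in classes:
        region = class_map == c
        interior = ndimage.binary_erosion(region, iterations=erosion_iters)
        # Fallback: if erosion removes the whole object, keep the original region.
        pure_mask |= interior if interior.any() else region

    # 2. Endmembers: mean spectrum of the pure pixels of each class -> (bands, n_classes).
    endmembers = np.stack(
        [image[pure_mask & (class_map == c)].mean(axis=0) for c in classes], axis=1
    )

    # 3. Proportions: pure pixels get 1 for their own class; remaining (edge, mixed)
    #    pixels are unmixed by least squares, then clipped and renormalized as a
    #    crude proxy for the non-negativity and sum-to-one constraints.
    proportions = np.zeros((rows, cols, len(classes)))
    for k, c in enumerate(classes):
        proportions[pure_mask & (class_map == c), k] = 1.0

    for r, cc in np.argwhere(~pure_mask):
        p, *_ = np.linalg.lstsq(endmembers, image[r, cc], rcond=None)
        p = np.clip(p, 0, None)
        s = p.sum()
        proportions[r, cc] = p / s if s > 0 else 1.0 / len(classes)

    return proportions, pure_mask
```

In the full UO-SPM pipeline, the resulting proportion images would then be passed to an SPM method (e.g., HNN, MRF, PSA or RBF interpolation) to predict the subpixel land cover distribution.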