Semi-supervised learning has gained significant attention in remote sensing due to its ability to leverage both a limited number of labeled samples and a large quantity of unlabeled data. An effective semi-supervised approach uses unlabeled samples to enforce prediction consistency under minor perturbations, reducing the model's sensitivity to noise and suppressing false positives in change detection (CD) tasks. This principle underlies consistency-regularization-based methods. However, while these methods improve noise robustness, they also risk overlooking subtle but meaningful changes, leading to information loss and missed detections. To address this issue, we introduce a simple yet effective method called Sample Inflation Interpolation (SII), which leverages labeled sample pairs to mitigate the information loss caused by consistency regularization. Specifically, we propose a novel data augmentation strategy that generates additional change samples by combining existing supervised change samples in proportions computed from their change areas. This strategy increases both the quantity and diversity of change samples in the training set, compensating for potential information loss and reducing missed detections. Furthermore, to prevent overfitting, small perturbations are applied to the generated sample pairs and their labels. Experiments on two public CD datasets validate the effectiveness of the proposed method. Remarkably, even with only 5% of labeled training data, our method achieves performance levels that closely approach those of fully supervised models.
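The augmentation described above can be illustrated with a minimal sketch. This is an assumption-laden reconstruction, not the paper's implementation: the function name, the choice of mixing weight (derived from the relative change-area sizes), and the Gaussian perturbation are all hypothetical stand-ins for details the abstract does not specify.

```python
import numpy as np

def sample_inflation_interpolation(pair_a, pair_b, label_a, label_b,
                                   noise_std=0.01, rng=None):
    """Hypothetical sketch of SII-style augmentation.

    pair_a / pair_b : (t1, t2) bitemporal images, float arrays in [0, 1].
    label_a / label_b : binary change masks (1 = changed pixel).
    The mixing weight `lam` is derived from each sample's share of
    changed pixels -- an assumed proxy for the paper's
    "calculated proportions of change areas".
    """
    rng = np.random.default_rng() if rng is None else rng
    area_a = float(label_a.sum())
    area_b = float(label_b.sum())
    # Weight each sample by its fraction of changed pixels (assumption).
    lam = area_a / (area_a + area_b + 1e-8)

    # Linearly interpolate both temporal images and the labels.
    mixed_t1 = lam * pair_a[0] + (1.0 - lam) * pair_b[0]
    mixed_t2 = lam * pair_a[1] + (1.0 - lam) * pair_b[1]
    mixed_label = lam * label_a + (1.0 - lam) * label_b  # soft mask

    # Small perturbations on the generated pair and its label,
    # as the abstract suggests, to discourage overfitting.
    mixed_t1 = np.clip(mixed_t1 + rng.normal(0.0, noise_std, mixed_t1.shape), 0.0, 1.0)
    mixed_t2 = np.clip(mixed_t2 + rng.normal(0.0, noise_std, mixed_t2.shape), 0.0, 1.0)
    mixed_label = np.clip(mixed_label + rng.normal(0.0, noise_std, mixed_label.shape), 0.0, 1.0)
    return (mixed_t1, mixed_t2), mixed_label
```

The soft mixed labels can then be fed to the supervised loss alongside the original labeled pairs, inflating the pool of change samples without collecting new annotations.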