Local descriptors are an important upstream component in computer vision tasks. Despite considerable advances in deep learning-based descriptors, recent descriptors are still not robust enough to handle the wide viewpoint changes that arise in image matching tasks such as localization and 3D reconstruction. In this study, SDNet, a robust descriptor utilizing spatial adversarial perturbations and trained with a novel probabilistic dynamic weighting loss, is proposed to enhance performance under such challenges. First, to increase the robustness and generalization ability of the network across spatially transformed instances, a novel module for generating hard negative samples via spatial adversarial perturbations is designed. By maximizing the adversarial loss, this module generates more challenging patches, significantly enhancing the geometric robustness of the descriptor. Importantly, the module integrates seamlessly with existing patch-based descriptors without requiring extra training data. Second, to mitigate the imbalance in the matching relationship between the generated positive and negative pairs, a label-weighted triplet loss is proposed, which markedly improves descriptor performance. Third, a comprehensive theoretical analysis of previous studies is carried out from a gradient perspective, and a probabilistic dynamic weighting approach that adaptively emphasizes weighting functions with higher likelihoods is proposed to improve the training performance of the descriptor. Extensive experiments on mainstream datasets demonstrate the effectiveness of SDNet: the proposed method achieves significant improvements on the UBC, HPatches, and ETH datasets, outperforming current state-of-the-art methods. The code is available at https://github.com/webd111/sdnet.
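To make the idea of spatial adversarial perturbations concrete, the following is a minimal sketch in PyTorch: a small learnable affine warp applied to the negative patches is updated by gradient ascent on a triplet loss, yielding harder negatives for the descriptor. The function name, the affine perturbation scheme, and all hyperparameters here are illustrative assumptions rather than the authors' actual implementation.

```python
# Sketch only: spatially perturb negatives to maximize a triplet loss.
# `descriptor` maps patches (N, C, H, W) to L2-normalized embeddings; the
# perturbation parameterization and step sizes are assumptions.
import torch
import torch.nn.functional as F


def spatial_adversarial_negatives(descriptor, anchor, positive, negative,
                                  steps=3, step_size=0.01, margin=1.0):
    """Return spatially warped negatives that maximize the triplet loss
    against the current descriptor."""
    n = negative.size(0)
    # Start from the identity affine transform, shape (N, 2, 3).
    identity = torch.tensor([[1.0, 0.0, 0.0],
                             [0.0, 1.0, 0.0]],
                            device=negative.device).repeat(n, 1, 1)
    delta = torch.zeros_like(identity, requires_grad=True)

    for _ in range(steps):
        grid = F.affine_grid(identity + delta, negative.size(),
                             align_corners=False)
        warped = F.grid_sample(negative, grid, align_corners=False)
        loss = F.triplet_margin_loss(descriptor(anchor),
                                     descriptor(positive),
                                     descriptor(warped),
                                     margin=margin)
        # Ascend the loss w.r.t. the warp parameters only (adversarial
        # step); the descriptor's weights are left untouched here.
        (grad,) = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += step_size * grad.sign()

    # Final hard negatives, detached so the descriptor's own training
    # step treats them as fixed inputs.
    grid = F.affine_grid(identity + delta.detach(), negative.size(),
                         align_corners=False)
    return F.grid_sample(negative, grid, align_corners=False)
```

In a training loop, such perturbed patches would simply replace the original negatives in the descriptor's triplet loss, consistent with the claim above that the module plugs into existing patch-based descriptors without extra training data.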