Backdoor attacks against deep neural networks (DNNs) have emerged as a serious security threat. Such backdoors can be introduced by third parties through maliciously manipulated training data. Existing backdoor attacks mostly rely on perturbation-based trigger patterns in the spatial domain, which hampers practical deployment because the triggers are easily spotted by inspectors. Moreover, shortcut learning makes the backdoored network less robust against defense methods. This work proposes an effective and flexible backdoor attack in the frequency domain. The method injects specific natural perturbations into the frequency domain of images; the resulting triggers cause minimal changes to an image’s semantic content, rendering them nearly imperceptible to human observers. To evade machine-based defenses, we further introduce a new training paradigm based on negative sampling, which forces the network to learn richer differences as the trigger pattern. We evaluate our attacks on popular convolutional neural networks, vision transformers, and MLP-Mixer models, using four standard datasets: MNIST, CIFAR-10, GTSRB, and ImageNet. Experimental results demonstrate that the trained networks can be successfully injected with backdoors. Our attacks achieve high attack success rates in both all-to-one (near 100% on all datasets) and all-to-all (over 90% on all datasets except ImageNet) scenarios, and they remain robust against contemporary state-of-the-art defense mechanisms. Furthermore, our results reveal that DNNs can capture discrepancies in the frequency components of images that are barely perceptible to humans.
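
For illustration only, the following is a minimal sketch of what a frequency-domain trigger of this general kind might look like: a few FFT coefficients of each color channel are perturbed and the image is transformed back, producing a change that is small in pixel space. The function name `add_frequency_trigger`, the chosen frequency positions, and the perturbation magnitude are assumptions for this example, not the paper's actual trigger design or its negative-sampling training procedure.

```python
import numpy as np

def add_frequency_trigger(image, magnitude=0.03, freq_positions=((8, 8), (24, 24))):
    """Perturb a few FFT coefficients of each channel and transform back.

    `magnitude` and `freq_positions` are illustrative placeholders, not
    values taken from the paper.
    """
    img = image.astype(np.float64) / 255.0          # H x W x C in [0, 1]
    h, w, channels = img.shape
    poisoned = np.empty_like(img)
    for c in range(channels):
        spectrum = np.fft.fft2(img[:, :, c])
        for (u, v) in freq_positions:
            # Scale by h * w so the perturbation appears in pixel space as a
            # sinusoid of amplitude ~`magnitude`, i.e. a few gray levels at most.
            spectrum[u, v] += magnitude * h * w
        # Keep only the real part, since modifying isolated bins makes the
        # inverse transform slightly complex-valued.
        poisoned[:, :, c] = np.real(np.fft.ifft2(spectrum))
    return np.clip(poisoned * 255.0, 0, 255).astype(np.uint8)

# Example: stamp the trigger onto a CIFAR-10-sized RGB image.
clean = (np.random.rand(32, 32, 3) * 255).astype(np.uint8)
poisoned = add_frequency_trigger(clean)
```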