In recent years, backdoor attacks have posed a significant security threat to the training of deep neural networks (DNNs). An attacker embeds triggers in the training set so that the victim model behaves normally on benign samples but maliciously alters its predictions when the hidden backdoor is activated by a trigger. However, we found that when trigger locations differ between the training and test data, the success rate of the backdoor attack decreases, a finding that held consistently across the models and datasets in our experiments. To address this drop in attack success rate caused by inconsistent trigger distributions between training and test data, we propose a mixed-training backdoor attack. The method improves the generalization of the attack by constructing poisoned training samples with triggers placed at various positions. Applied to the BadNets and Blended attacks, mixed training improved the Attack Success Rate (ASR) on test sets by up to 99.57% and 94.96%, respectively, across different models and datasets.
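The core idea of mixed-position poisoning can be illustrated with a minimal sketch. The snippet below is not the paper's implementation; the function names (`stamp_trigger`, `poison_mixed_positions`) and parameters (`poison_rate`, `target_label`) are illustrative assumptions. It poisons a fraction of a NumPy image batch, pasting a small trigger patch at a random location in each poisoned sample and relabeling it to the attacker's target class:

```python
import numpy as np

def stamp_trigger(image, trigger, top, left):
    """Return a copy of `image` with the `trigger` patch pasted at (top, left)."""
    poisoned = image.copy()
    h, w = trigger.shape[:2]
    poisoned[top:top + h, left:left + w] = trigger
    return poisoned

def poison_mixed_positions(images, labels, trigger, target_label,
                           poison_rate=0.1, rng=None):
    """Poison a fraction of the dataset for mixed-position training:
    each poisoned image gets the trigger at an independently random
    location, and its label is flipped to `target_label`."""
    rng = rng if rng is not None else np.random.default_rng(0)
    images = images.copy()
    labels = labels.copy()
    n = len(images)
    h, w = trigger.shape[:2]
    H, W = images.shape[1:3]
    # Randomly pick which samples to poison.
    idx = rng.choice(n, size=int(poison_rate * n), replace=False)
    for i in idx:
        # Sample a random top-left corner so the trigger fits in the image.
        top = int(rng.integers(0, H - h + 1))
        left = int(rng.integers(0, W - w + 1))
        images[i] = stamp_trigger(images[i], trigger, top, left)
        labels[i] = target_label
    return images, labels
```

At test time the trigger can then appear anywhere in the image and the backdoor should still fire, which is the generalization the mixed-training scheme targets.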