Deep neural networks have achieved remarkable success in computer vision, yet they remain susceptible to adversarial attacks. The transferability of adversarial samples makes practical black-box attacks feasible, underscoring the importance of research on transferability. Existing work indicates that adversarial samples tend to overfit to the source model and become trapped in local optima, which reduces their transferability. To address this issue, we propose the Random Noise Transfer Attack (RNTA), which searches for adversarial samples over a broader data distribution in pursuit of a global optimum. Specifically, we inject multiple random noise perturbations into the sample before each optimization iteration, effectively exploring the decision boundary within an extended data distribution space. By aggregating the gradients of these noise-augmented copies, we identify a better global optimum and mitigate overfitting to the source model. Through extensive experiments on the large-scale ImageNet classification task, we demonstrate that our method increases the success rate of momentum-based attacks by an average of 20.1%. Furthermore, our approach can be combined with existing attack methods, achieving a success rate of 94.3%, which highlights the insecurity of current models and defense mechanisms.
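The following is a minimal sketch of the noise-aggregated gradient step described above, assuming a PyTorch workflow with a momentum-based (MI-FGSM-style) update. The function name rnta_step and the parameters n_noise, noise_sigma, and decay are illustrative assumptions, not the authors' exact algorithm or settings.

```python
import torch

def rnta_step(model, loss_fn, x_adv, y, grad_momentum,
              n_noise=5, noise_sigma=0.1, decay=1.0):
    """Sketch of one iteration: average gradients over several randomly
    perturbed copies of the current adversarial example, then fold the
    averaged gradient into a momentum accumulator. Hyperparameters are
    placeholders, not the paper's reported values."""
    grad_sum = torch.zeros_like(x_adv)
    for _ in range(n_noise):
        # Inject a random noise perturbation around the current sample.
        noisy = (x_adv + noise_sigma * torch.randn_like(x_adv)).detach()
        noisy.requires_grad_(True)
        loss = loss_fn(model(noisy), y)
        grad_sum += torch.autograd.grad(loss, noisy)[0]
    # Aggregate gradients across the noise-augmented copies.
    grad_avg = grad_sum / n_noise
    # Momentum update with L1-normalized gradient, as in MI-FGSM;
    # the noise-averaged gradient replaces the single-point gradient.
    grad_momentum = decay * grad_momentum + grad_avg / (grad_avg.abs().sum() + 1e-12)
    return grad_momentum
```

In such a setup, the adversarial example would then be updated along the sign of the momentum, e.g. x_adv + alpha * grad_momentum.sign(), followed by projection back into the epsilon-ball around the clean input and clipping to the valid pixel range.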