Deep neural networks (DNNs) are vulnerable to adversarial examples, which are crafted by adding imperceptible perturbations to benign examples. Increasing the attack success rate usually requires a larger noise magnitude, which makes the perturbation noticeable. To this end, we propose the Transformed Gradient method (TG), which achieves a higher attack success rate with smaller perturbations against the target model, i.e., an ensemble of black-box defense models. It consists of three steps: original gradient accumulation, gradient amplification, and gradient truncation. In addition, we introduce the Fréchet Inception Distance (FID) and Learned Perceptual Image Patch Similarity (LPIPS) to evaluate fidelity and perceptual distance from the original example, respectively, which is more comprehensive than using the L∞ norm as the only evaluation metric. Furthermore, we propose optimizing the ensemble coefficients of the source models to improve adversarial attacks. Extensive experimental results demonstrate that the perturbations of adversarial examples generated by our method are smaller than those of state-of-the-art baselines, namely MI, DI, TI, and RF-DE based on vanilla iterative FGSM, as well as their combinations. Compared with the baseline method, the average black-box attack success rate and total score are improved by 6.6% and 13.8, respectively. Our code is publicly available at https://github.com/Hezhengyun/Transformed-Gradient.
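The three-step gradient transform named above (accumulation, amplification, truncation) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's exact algorithm: the function name `transformed_gradient_step`, the normalization, and the hyperparameters `amp` and `trunc_ratio` are all assumptions for illustration.

```python
import numpy as np

def transformed_gradient_step(grad, accum, step_size, amp=1.5, trunc_ratio=0.1):
    """One TG-style update sketch: accumulate, amplify, then truncate the gradient.

    `amp` and `trunc_ratio` are illustrative hyperparameters, not values from the paper.
    Returns the signed perturbation update and the new accumulator state.
    """
    # Step 1: original gradient accumulation (momentum-style running sum,
    # with the current gradient normalized by its mean absolute value).
    accum = accum + grad / (np.mean(np.abs(grad)) + 1e-12)
    # Step 2: gradient amplification — scale the accumulated gradient.
    amplified = amp * accum
    # Step 3: gradient truncation — zero out weak components so only strong
    # directions contribute, keeping the visible perturbation small.
    mask = np.abs(amplified) >= trunc_ratio * np.max(np.abs(amplified))
    update = step_size * np.sign(amplified) * mask
    return update, accum
```

In an iterative FGSM-style attack loop, `update` would be added to the adversarial image and clipped to the allowed perturbation budget at each step.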