Previous works have shown that deep learning models are vulnerable to adversarial examples crafted by adding negligible confusing noise to clean images. However, under the black-box setting, adversarial examples either transfer poorly to defense models or fail to remain imperceptible to humans. To bridge this gap, a variable adversarial attack method based on filtering, called VFI-FGSM, is proposed. A Variable Step Size Method (VSSM) is designed to adaptively control the perturbation and avoid overfitting; a Filtering-based Iterative Fast Gradient Sign Method (FI-FGSM) is explored to enhance the robustness of the attack; an auxiliary loss on high-level representations is added to obtain a more precise gradient direction and improve the attack success rate; and a new criterion, the Contribution for Reducing Perturbation Size (CRPS), is presented in this paper to measure the performance of attack algorithms. The results show that the proposed scheme increases the attack success rate against defense models in the black-box setting by an average of 14.36% with the ensemble attack method and by 7.9% on the combined trained model. Moreover, the noise is perceptually small under both the L1 and L2 metrics.
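As a rough illustration of how the named components could interact, the sketch below combines an iterative sign-gradient attack with a depthwise averaging filter on the gradient, a linearly decaying step size, and an auxiliary feature-space loss. This is a minimal sketch under stated assumptions, not the paper's exact VFI-FGSM: the 3x3 averaging kernel, the decay schedule, the hooked feature layer, and the loss weight `beta` are all illustrative choices introduced here.

```python
# Minimal sketch of a filtering-based iterative FGSM with a variable step
# size and an auxiliary high-level feature loss. The filter, schedule,
# feature layer, and `beta` are assumptions, not the paper's exact method.
import torch
import torch.nn.functional as F

def vfi_fgsm_sketch(model, feature_layer, x, y, eps=16 / 255, steps=10, beta=0.5):
    """Craft adversarial examples from clean images x (in [0, 1]) with labels y."""
    x_adv = x.clone().detach()
    c = x.shape[1]
    # Assumed 3x3 depthwise averaging kernel used to smooth the gradient.
    kernel = torch.ones(c, 1, 3, 3, device=x.device) / 9.0

    # Capture high-level representations via a forward hook on an assumed layer.
    feats = {}
    def hook(_, __, out):
        feats["h"] = out
    handle = feature_layer.register_forward_hook(hook)

    with torch.no_grad():
        model(x)
        clean_feat = feats["h"].detach()  # reference features of the clean image

    for t in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        # Auxiliary loss: push high-level features away from the clean ones,
        # in addition to the usual cross-entropy objective.
        loss = F.cross_entropy(logits, y) + beta * F.mse_loss(feats["h"], clean_feat)
        grad = torch.autograd.grad(loss, x_adv)[0]

        # Filtering step: smooth the gradient depthwise before taking its sign.
        grad = F.conv2d(grad, kernel, padding=1, groups=c)

        # Variable step size: an assumed linearly decaying schedule, so early
        # iterations move farther and later ones refine the perturbation.
        alpha = (eps / steps) * 2.0 * (1.0 - t / steps)

        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)

    handle.remove()
    return x_adv.detach()
```

With a torchvision ResNet, for instance, this could be called as `x_adv = vfi_fgsm_sketch(model, model.layer3, images, labels)`. Smoothing the gradient before taking its sign is the usual motivation for filtering-based transfer attacks: the smoothed direction is less tied to one model's idiosyncrasies, so the perturbation tends to transfer better in the black-box setting.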