Deep learning systems are known to be vulnerable to adversarial examples, yet most existing work focuses on manipulating and attacking images in the digital domain. Although recent research has proposed physical attacks based on Expectation Over Transformation (EoT), these approaches are typically tailored to specific classifiers and require extensive sample collection, which limits their practical efficiency. In this paper, we address these issues by introducing the Adversarial Fast Autoaugmentation (AFA) method, which streamlines the collection of training samples and thereby reduces the sample collection burden. Building on AFA, we propose the AFA-based multi-sample ensemble method (AFA-MSEM) and the AFA-based most-likely ensemble method (AFA-MLEM), which craft adversarial attacks that effectively deceive classifiers in both digital and real-world scenarios. In addition, our adaptive norm algorithm produces perturbations that are both faster to compute and smaller than those of state-of-the-art attack methods. Moreover, AFA-MLEM, extended with a weighted objective function, generates robust adversarial examples that simultaneously mislead multiple classifiers (Inception-v3, Inception-v4, ResNet-v2, and Inception-ResNet-v2) in real-world scenarios. Experimental results demonstrate that our attacks achieve higher success rates than existing methods and remain effective against multi-model defense systems. Overall, the proposed methods improve the effectiveness, efficiency, and robustness of adversarial attacks on deep learning systems.