Transfer-based attacks craft adversarial examples on a proxy model and then deploy them against the target model, and they have driven significant progress in black-box attacks. Recent research suggests that these attacks can be further strengthened by incorporating adversarial defenses into the generation of adversarial examples: the defense supervises the generation process, forcing the attacker to overcome a harder objective and thereby produce more robust adversarial examples with better transferability. However, current methods rely mainly on limited input-transformation defenses that apply only simple affine changes. Such defenses are insufficient for removing the harmful perturbations in adversarial examples, so the resulting gains in transferability are modest. To address this issue, we propose a novel training framework named Transfer-based Attacks through Hypothesis Defense (TA-HD), which improves the generalization of adversarial examples by integrating a hypothesis defense mechanism into the proxy model. Specifically, we adopt an input denoising network as the hypothesis defense to effectively remove harmful noise from adversarial examples. We further introduce an adversarial training strategy and design dedicated adversarial loss functions to optimize the denoising network's parameters. Visualizations of the training process demonstrate the effective denoising capability of the hypothesis defense and the stability of training. Extensive experiments show that the proposed framework improves the success rate of transfer-based attacks by up to 19.9%. The code is available at https://github.com/haolingguang/TA-HD.
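To make the alternating scheme concrete, the following is a minimal PyTorch-style sketch of one min-max round in the spirit of TA-HD, assuming a frozen pretrained proxy classifier. The `Denoiser` architecture, the single FGSM-style attacker step, and the reconstruction term in the defender loss are illustrative assumptions, not the paper's exact design; consult the repository above for the actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Denoiser(nn.Module):
    """Hypothetical lightweight input-denoising network (the 'hypothesis defense')."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        # Predict the noise residual and subtract it from the input.
        return x - self.net(x)

def ta_hd_round(proxy, denoiser, opt_d, x, y, x_adv, eps=8/255, alpha=2/255):
    """One alternating min-max round: attacker step, then denoiser step.
    `proxy` is assumed frozen (all parameters have requires_grad=False)."""
    # --- Attacker: maximize proxy loss on the *denoised* adversarial input ---
    x_adv = x_adv.clone().detach().requires_grad_(True)
    loss_atk = F.cross_entropy(proxy(denoiser(x_adv)), y)
    grad, = torch.autograd.grad(loss_atk, x_adv)
    x_adv = x_adv.detach() + alpha * grad.sign()   # FGSM-style ascent step
    x_adv = x + (x_adv - x).clamp(-eps, eps)       # project into the L_inf ball
    x_adv = x_adv.clamp(0, 1)

    # --- Defender: train the denoiser to restore correct proxy predictions ---
    opt_d.zero_grad()
    loss_def = (F.cross_entropy(proxy(denoiser(x_adv)), y)
                + F.mse_loss(denoiser(x_adv), x))  # assumed reconstruction term
    loss_def.backward()
    opt_d.step()
    return x_adv
```

In a full loop this round would be repeated for several iterations per batch, so the final adversarial example must still fool the proxy after the co-adapting denoiser has stripped its perturbation; it is this harder objective that, per the abstract, yields adversarial examples that transfer better to unseen target models.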