Deep learning models have emerged as powerful and efficient tools that can be applied to a broad spectrum of complex learning problems and many real-world applications. However, a growing body of work shows that deep models are vulnerable to adversarial examples. In contrast to vanilla attack settings, this paper advocates the more practical setting of data-free black-box attack, in which the attacker has no access to the structure or parameters of the target model, nor to its intermediate features or any training data associated with it. To tackle this task, previous methods train a substitute model and transfer adversarial examples generated on it to the target model. However, we find that these works suffer from several limitations: they adopt a static substitute model structure regardless of the target, use hard synthesized examples only once, and still rely on data statistics of the target model, all of which may harm attack performance on the target model. To this end, we propose a novel Dynamic Routing and Knowledge Re-Learning framework (DraKe) to effectively learn a dynamic substitute model from the target model. Specifically, given synthesized training samples, a dynamic substitute structure learning strategy is proposed to adaptively generate an optimal substitute model structure via a policy network, according to the target model and task at hand. To facilitate substitute training, we present a graph-based structural information learning scheme to capture the structural knowledge distilled from the target model. To address the inherent limitation that online-generated data can be learned from only once, a dynamic knowledge re-learning strategy is proposed to adjust the weights of the optimization objectives and re-learn hard samples. Extensive experiments on four public image classification datasets and one face recognition benchmark are conducted to evaluate the efficacy of our DraKe.
DraKe obtains significant improvements over state-of-the-art competitors. More importantly, it consistently achieves superior attack performance across different target models (e.g., residual networks and vision transformers), showing great potential for complex real-world applications.