Federated learning shows promise as a privacy-preserving collaborative learning technique. Existing research on heterogeneous federated learning mainly focuses on skewed class distributions across clients. However, most approaches suffer from catastrophic forgetting and classifier shift, especially when the global distribution of all classes is extremely unbalanced and client data distributions evolve dynamically over time. In this paper, we study Dynamic Heterogeneous Federated Learning, which addresses the practical scenario where heterogeneous data distributions exist across clients and tasks evolve dynamically within each client. Accordingly, we propose a novel federated learning framework named Federated Multi-Level Prototypes and design federated multi-level regularizations. To mitigate classifier shift, we construct semantic prototypes that provide rich generalization knowledge. To maintain model stability and consistent convergence, three regularizations are introduced as training losses: prototype-based regularization, semantic prototype-based regularization, and federated inter-task regularization. Extensive experiments show that the proposed method achieves state-of-the-art performance in various settings.
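To make the prototype idea concrete, below is a minimal PyTorch sketch of class prototypes (per-class mean features) and a prototype-based regularization term that pulls local features toward server-aggregated global prototypes. This is an illustration under simplifying assumptions, not the paper's implementation; the helper names `compute_prototypes` and `prototype_loss` are hypothetical, and the paper's multi-level (semantic and inter-task) construction is more elaborate.

```python
# Illustrative sketch only: class prototypes and a prototype-based
# regularization loss. Names are hypothetical, not from the paper.
import torch
import torch.nn.functional as F


def compute_prototypes(features: torch.Tensor, labels: torch.Tensor,
                       num_classes: int) -> torch.Tensor:
    """Mean feature vector per class; rows for absent classes stay zero."""
    dim = features.size(1)
    protos = torch.zeros(num_classes, dim, device=features.device)
    counts = torch.zeros(num_classes, device=features.device)
    protos.index_add_(0, labels, features)            # sum features per class
    counts.index_add_(0, labels,
                      torch.ones_like(labels, dtype=torch.float))
    return protos / counts.clamp(min=1).unsqueeze(1)  # avoid divide-by-zero


def prototype_loss(features: torch.Tensor, labels: torch.Tensor,
                   global_protos: torch.Tensor) -> torch.Tensor:
    """Pull each local feature toward the global prototype of its class."""
    return F.mse_loss(features, global_protos[labels])


# Usage: regularize a client's batch toward global prototypes the server
# would broadcast (random tensors stand in for real embeddings here).
feats = torch.randn(32, 128)             # batch of feature embeddings
labels = torch.randint(0, 10, (32,))     # class labels
global_protos = torch.randn(10, 128)     # server-aggregated prototypes

reg = prototype_loss(feats, labels, global_protos)
local_protos = compute_prototypes(feats, labels, num_classes=10)
```

In a full pipeline, a term like `reg` would be added to the client's supervised loss, and local prototypes such as `local_protos` would be uploaded for server-side aggregation; the semantic and inter-task regularizations described in the paper extend this same pattern to higher-level prototype spaces and across tasks.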