Few-shot learning and meta-learning have been widely studied for their ability to reduce the burden of data annotation. However, real-world applications often involve training and target tasks drawn from different domains, and the resulting domain shift degrades the generalization of existing methods. To address this issue, we propose RDProtoFusion, a novel method that leverages refined discriminative class prototypes to achieve task augmentation and bridge the domain shift. First, we refine naive prototypes with query samples to avoid the estimation bias caused by the limited size of the support set. Second, we perform multi-task fusion through class-representative prototypes, yielding flexible, low-complexity task augmentation that effectively alleviates overfitting. Third, we propose a prototype contrastive loss that enhances the discrimination between class prototypes, producing robust and accurate prototypes. Together, prototype refinement, fusion, and contrast effectively improve the model's generalization. Extensive experiments on multiple cross-domain few-shot learning benchmarks demonstrate that RDProtoFusion achieves state-of-the-art performance. Overall, the proposed method shows great potential for addressing the challenge of domain shift in few-shot learning, offering a promising avenue for real-world applications.
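To make the three components concrete, the sketch below illustrates one plausible instantiation in NumPy: class prototypes computed as support-set means, a query-based refinement that blends in soft-assigned query embeddings, and a contrastive-style penalty on inter-prototype similarity. The function names, the exponential soft-assignment weights, the blending coefficient `alpha`, and the temperature `tau` are illustrative assumptions for exposition; they are not the paper's exact formulations.

```python
import numpy as np

def naive_prototypes(support, support_labels, n_classes):
    # Naive prototype: mean of the support embeddings of each class.
    return np.stack([support[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def refine_prototypes(protos, query, alpha=0.5):
    # Soft-assign each query embedding to the prototypes by distance,
    # then blend the weighted query mean into each prototype to reduce
    # the bias of a small support set (alpha is an assumed mixing weight).
    d = ((query[:, None, :] - protos[None, :, :]) ** 2).sum(-1)  # (Q, C)
    w = np.exp(-d)
    w = w / w.sum(axis=1, keepdims=True)                         # soft assignments
    q_mean = (w.T @ query) / (w.sum(axis=0)[:, None] + 1e-8)     # (C, D)
    return alpha * protos + (1 - alpha) * q_mean

def prototype_contrastive_loss(protos, tau=0.5):
    # Penalize similarity between distinct prototypes so classes stay
    # discriminable: log-sum-exp of off-diagonal cosine similarities.
    p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    sim = p @ p.T / tau
    c = len(protos)
    off_diag = sim[~np.eye(c, dtype=bool)]
    return np.log(np.exp(off_diag).sum())
```

Under this sketch, the loss is lower when prototypes point in well-separated directions, so minimizing it pushes class prototypes apart, which is the stated goal of the prototype contrast step.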