Although standard meta-learning methods have demonstrated strong performance in few-shot image classification, the resulting models typically cannot assess the reliability of their predictions, which poses risks in safety-critical applications. To address this problem, we first propose a meta-learning-based Evidential Deep Learning (EDL) method called Meta Evidential Deep Learning (MetaEDL), which enables reliable prediction in the few-shot image classification setting. Like general meta-learning methods, MetaEDL employs a shallow neural network as its feature extractor to avoid overfitting on few-shot samples, which significantly restricts its ability to extract rich features. To overcome this limitation, we further propose Meta Transfer Evidential Deep Learning (MetaTEDL) for trustworthy few-shot classification. MetaTEDL adopts a large-scale pre-trained neural network as its feature extractor; during meta-training, only two lightweight neuron operations, Scaling and Shifting, are trained, which reduces the risk of overfitting. Two evidential head networks are then trained to integrate evidence from different sources, improving the quality of the output evidence. We conduct comprehensive experiments on several challenging few-shot classification benchmarks. The results indicate that our proposed method not only outperforms conventional meta-learning methods in few-shot classification accuracy, but also performs well on uncertainty quantification (UQ), uncertainty-guided active learning, and out-of-distribution (OOD) detection.
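To make the two core ideas concrete, the following is a minimal illustrative sketch, not the authors' implementation: (1) adapting frozen pre-trained features with lightweight per-dimension Scaling and Shifting parameters, and (2) an evidential head that maps features to non-negative class evidence, from which a Dirichlet distribution yields both a prediction and an uncertainty score (the standard EDL recipe). All function and variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def scale_shift(features, gamma, beta):
    """Adapt frozen features with per-dimension scale and shift.
    In MetaTEDL-style training, only gamma and beta would be meta-trained,
    while the pre-trained extractor stays fixed (hypothetical sketch)."""
    return gamma * features + beta

def evidential_head(features, W):
    """Map features to non-negative class evidence via a softplus output."""
    logits = features @ W
    return np.log1p(np.exp(logits))  # softplus keeps evidence >= 0

def dirichlet_predict(evidence):
    """Dirichlet parameters alpha = evidence + 1; return class probabilities
    (the Dirichlet mean) and the vacuity-style uncertainty u = K / sum(alpha)."""
    alpha = evidence + 1.0
    probs = alpha / alpha.sum()
    uncertainty = alpha.size / alpha.sum()  # in (0, 1]; 1 means no evidence
    return probs, uncertainty

# Toy example: a 5-way few-shot head on 16-dimensional features.
feat = rng.normal(size=16)
gamma, beta = np.ones(16), np.zeros(16)   # identity adaptation to start
W = rng.normal(scale=0.1, size=(16, 5))
probs, u = dirichlet_predict(evidential_head(scale_shift(feat, gamma, beta), W))
```

With zero evidence, `u` equals 1 (maximal uncertainty); as total evidence grows, `u` shrinks toward 0, which is what enables the uncertainty-guided active learning and OOD detection described above.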