Advanced artificial intelligence (AI) models typically achieve high accuracy in wind power forecasting, but their internal mechanisms lack interpretability, which undermines users' confidence in the forecasts and in the strategies executed based on them. To address this issue, this paper investigates the interpretability of AI models, an aspect that is crucial yet frequently overlooked in wind power forecasting. Specifically, four model-agnostic explainable artificial intelligence (XAI) techniques (i.e., Shapley additive explanations, permutation feature importance, partial dependence plots, and local interpretable model-agnostic explanations) are tailored to provide both global and instance-level interpretability for AI models in wind power forecasting. Several metrics are then proposed to evaluate the trustworthiness of the interpretations provided by these XAI techniques. Simulation results demonstrate that the tailored XAI techniques not only identify the important features in wind power datasets, but also reveal the contribution of each feature to the forecast power output for a specific sample. Furthermore, the proposed evaluation metrics help users comprehensively assess the trustworthiness of XAI techniques in wind power forecasting, enabling them to judiciously select suitable XAI techniques for their AI models.
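
As a minimal illustrative sketch (not the paper's implementation), the snippet below shows how the four model-agnostic XAI techniques named above could be applied to a generic wind power regressor. The synthetic dataset, the feature names (wind_speed, wind_direction, temperature, pressure), and the RandomForestRegressor surrogate model are assumptions made purely for demonstration.

```python
# Hedged sketch: the data, feature names, and model below are illustrative
# assumptions, not the paper's dataset or forecasting model.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay, permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["wind_speed", "wind_direction", "temperature", "pressure"]
X = rng.normal(size=(500, 4))
# Toy proxy for a power curve: output dominated by a cubic wind-speed term.
y = 0.8 * X[:, 0] ** 3 + 0.2 * X[:, 2] + rng.normal(scale=0.1, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# 1) Shapley additive explanations: global and per-sample feature attributions.
explainer = shap.Explainer(model.predict, X_train)
shap_values = explainer(X_test[:50])

# 2) Permutation feature importance: global importance measured as the
#    degradation in test score when each feature is shuffled.
pfi = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# 3) Partial dependence plot: marginal effect of wind_speed (feature 0)
#    on the forecast power output.
PartialDependenceDisplay.from_estimator(
    model, X_test, features=[0], feature_names=feature_names
)

# 4) Local interpretable model-agnostic explanations: local surrogate
#    explanation for one forecast sample.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names, mode="regression"
)
lime_exp = lime_explainer.explain_instance(X_test[0], model.predict, num_features=4)
```

In this setup, SHAP and LIME supply the instance-level (local) explanations, while permutation feature importance and the partial dependence plot supply the global view, mirroring the global/instance split described in the abstract.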