We are witnessing rapid development of Artificial Intelligence (AI), yet explaining AI models has become dramatically more challenging over the past decade. "Explanation" is a flexible philosophical concept, often understood as "satisfying subjective curiosity for causal information," which has driven a wide spectrum of methods invented in, or adapted from, many communities, including machine learning, visual analytics, and human-computer interaction. Nevertheless, from the viewpoint of data and knowledge engineering (DKE), a best practice for explanation that is cost-effective in terms of extra intelligence acquisition should exploit the causal information and scenarios that lie richly hidden in the data itself. Over the past several years, many works have contributed along this line, but a clear taxonomy and systematic review of these efforts is still lacking. To this end, we propose this survey, which reviews and taxonomizes existing efforts from the DKE viewpoint, summarizing their contributions, technical essence, and comparative characteristics. Specifically, we categorize methods into data-driven methods, where explanations are derived from task-related data, and knowledge-aware methods, where external knowledge is incorporated. Furthermore, with an eye toward practice, we survey state-of-the-art evaluation metrics and explanation applications deployed in industry.