As a powerful expression of human knowledge in structured form, the knowledge graph (KG) has drawn great attention from both academia and industry, and a large number of construction and application technologies have been proposed. Large-scale knowledge graphs such as DBpedia, YAGO, and Wikidata have been published and are widely used in various tasks. However, most of them are far from perfect and suffer from quality issues: for example, they may contain inaccurate or outdated entries or lack sufficient coverage of facts, which limits their credibility and utility. Data quality has a long research history in the field of traditional relational data and has recently attracted increasing attention from knowledge graph researchers. In this paper, we provide a systematic and comprehensive review of quality management for knowledge graphs, covering not only quality issues, dimensions, and metrics, but also the quality management processes, from quality assessment and error detection to error correction and KG completion. To aid understanding, we categorize existing works in terms of their target goals and the methods they employ. Finally, we discuss key open issues and promising directions for future research on knowledge graph quality management.