Knowledge graphs (KGs) can aggregate data and make information resources easier to process and understand. With tremendous advances, knowledge graphs have been incorporated into many software systems to support various tasks. However, although the quality of a KG largely determines the performance of downstream software systems, it is typically measured against manually labeled test data. Given the limited availability of high-quality test data, an automated quality assessment technique could fundamentally improve the testing efficiency of KG-driven software systems and save considerable manual labeling effort. In this paper, we propose an automated approach to quantify the quality of KGs via differential testing. It first constructs multiple knowledge graph embedding models (KGEMs) and performs head prediction tasks on each of them. It then produces a differential score that reflects the quality of the KG by comparing the proximity of their outputs. To validate the effectiveness of this approach, we experiment with four open-source knowledge graphs. The experimental results show that our approach accurately evaluates the quality of KGs and produces reliable results across different datasets. Moreover, our approach compares favorably with existing methods. The potential usefulness of our approach sheds light on the development of various KG-driven software systems.
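
To illustrate the differential-testing idea sketched above, the following is a minimal sketch, not the paper's actual implementation: it compares the ranked head predictions of several hypothetical KGEMs for the same (relation, tail) queries and turns their pairwise agreement into a differential score. The model names, the top-k overlap metric, and the data layout are all assumptions made for illustration.

```python
import numpy as np

def topk_overlap(ranked_a, ranked_b, k=10):
    """Jaccard overlap of two models' top-k predicted head entities."""
    a, b = set(ranked_a[:k]), set(ranked_b[:k])
    return len(a & b) / len(a | b)

def differential_score(predictions, k=10):
    """
    predictions: dict mapping model name -> list of ranked head-entity lists,
                 one ranked list per (relation, tail) query.
    Returns the mean pairwise top-k agreement over all model pairs and queries;
    under the differential-testing view, lower agreement suggests lower KG quality.
    """
    models = list(predictions)
    n_queries = len(predictions[models[0]])
    scores = []
    for q in range(n_queries):
        for i in range(len(models)):
            for j in range(i + 1, len(models)):
                scores.append(topk_overlap(predictions[models[i]][q],
                                           predictions[models[j]][q], k))
    return float(np.mean(scores))

# Hypothetical usage: ranked head predictions from three KGEMs
# (e.g., TransE-, DistMult-, and RotatE-style models) for two queries.
preds = {
    "transE":   [["e1", "e4", "e2"], ["e7", "e3", "e9"]],
    "distMult": [["e1", "e2", "e5"], ["e3", "e7", "e8"]],
    "rotatE":   [["e4", "e1", "e2"], ["e7", "e9", "e3"]],
}
print(differential_score(preds, k=3))
```

The specific agreement measure (top-k Jaccard overlap) is a placeholder; any proximity measure over the models' ranked outputs could play the same role in the differential score.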