Applying machine learning (ML) models within recommender systems (RSs) has proven effective in achieving high recommendation accuracy; however, a prevalent drawback is their inherent lack of explainability. Integrating knowledge graphs (KGs) into RSs enhances interpretability by elucidating the reasoning behind specific recommendations. Explainability evaluation in RSs often relies on subjective metrics derived from qualitative user feedback. Although such metrics offer a useful foundation for evaluating explainability, they are susceptible to confirmation bias, and quantitative metrics for assessing the explainability of KG-driven RSs are currently unavailable. This study therefore proposes a novel metric, the Max Explainability Score, for the quantitative evaluation of explanation quality in KG-based RSs. The metric is based on four evaluation parameters: the number of rules, the probability of the traversal path, the entropy of the traversal paths, and the reward associated with the chosen paths. User studies employing such quantitative metrics offer enhanced validity and better integration potential with future explainable AI (XAI) research.
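As a purely illustrative sketch of how the four evaluation parameters might feed into a single quantitative score, the snippet below normalizes each component and combines them with a weighted sum. The function name `max_explainability_score`, the normalization choices, and the equal default weights are assumptions for illustration only; the abstract does not specify the paper's actual aggregation formula.

```python
import math

def path_entropy(probs):
    """Shannon entropy (in bits) of a distribution over KG traversal paths."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def max_explainability_score(num_rules, path_prob, probs, reward,
                             weights=(0.25, 0.25, 0.25, 0.25)):
    """Hypothetical aggregation of the four parameters into one score in [0, 1].

    num_rules : number of rules backing the explanation
    path_prob : probability of the chosen traversal path
    probs     : probability distribution over all candidate paths
    reward    : reward attributed to the chosen traversal paths (assumed in [0, 1])
    """
    w1, w2, w3, w4 = weights
    rule_term = 1.0 / (1.0 + num_rules)            # fewer rules -> simpler explanation
    entropy = path_entropy(probs)
    max_entropy = math.log2(len(probs)) if len(probs) > 1 else 1.0
    entropy_term = 1.0 - entropy / max_entropy     # lower entropy -> more decisive paths
    return w1 * rule_term + w2 * path_prob + w3 * entropy_term + w4 * reward
```

For example, with one rule, a chosen path of probability 0.25 drawn from a uniform distribution over four paths, and a reward of 0.5, the score is 0.25·0.5 + 0.25·0.25 + 0.25·0 + 0.25·0.5 = 0.3125 under these assumed weights.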