Employee turnover (ET) is a major issue faced by firms across all business sectors. Artificial intelligence (AI)-based machine learning (ML) prediction models can help to classify the likelihood of employees voluntarily departing from employment, using historical employee datasets. However, the outputs generated by these AI-based ML models lack transparency and interpretability, making it difficult for HR managers to understand the rationale behind the predictions. If managers do not understand how and why responses are generated by AI models from the input data, these models are unlikely to augment data-driven decision-making or bring value to organisations. The main purpose of this article is to demonstrate how the Local Interpretable Model-Agnostic Explanations (LIME) technique can intuitively explain to HR managers the ET predictions generated by AI-based ML models for a given employee dataset. From a theoretical perspective, we contribute to the International Human Resource Management literature by presenting a conceptual review of AI algorithmic transparency and discussing its significance for sustaining competitive advantage, drawing on the principles of resource-based view theory. We also offer a transparent AI implementation framework using LIME, which provides a useful guide for HR managers to increase the explainability of AI-based ML models and thereby mitigate trust issues in data-driven decision-making.
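To illustrate the kind of LIME workflow the article refers to, the following is a minimal, self-contained sketch. It assumes the open-source `lime` Python package and scikit-learn, and it uses a synthetic toy dataset with hypothetical feature names rather than the article's actual employee data.

```python
# Minimal sketch; assumes scikit-learn and the `lime` package.
# The dataset and feature names below are synthetic/hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["age", "tenure_years", "monthly_salary", "overtime_hours"]  # hypothetical
X = rng.normal(size=(500, len(feature_names)))
# Synthetic label: 1 = employee leaves, 0 = employee stays.
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Train an opaque AI/ML turnover classifier.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# LIME explains one prediction at a time by fitting a simple local surrogate model
# around the instance being explained.
explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["stays", "leaves"],
    mode="classification",
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(explanation.as_list())  # per-feature contributions an HR manager can inspect
```

The printed list pairs each feature condition with its estimated contribution to the turnover prediction, which is the kind of instance-level explanation the article argues can make AI-based ML outputs interpretable for HR decision-makers.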