Artificial intelligence and machine learning have attracted significant attention in the preparation of landslide susceptibility maps (LSMs) over the years. Despite achieving considerable success, they frequently face criticism for their opaque nature and limited capacity to explain and interpret the resulting LSMs. This study uncovers the inherent characteristics of conditioning factors by investigating both the local and global driving forces behind landslide events through the lens of explainable artificial intelligence (XAI). To accomplish this, black-box algorithms, including random forest, gradient boosting machines, and extreme gradient boosting, as well as white-box algorithms, including logistic regression and decision trees, were employed to generate LSMs. Their internal structures were then illuminated with three global and one local XAI technique. The results unveiled a significant superiority of the black-box algorithms over the white-box algorithms, with improvements of up to 17 % in overall accuracy and 19 % in the area under the curve (AUC) score. Among them, gradient boosting machines exhibited the highest performance, achieving an overall accuracy of 87.88 % and an AUC of 0.9382. Global explanation analyses revealed that landslide susceptibility was predominantly influenced by slope, elevation, distance to roads, and lithological units. Local interpretations, conducted for three specific landslide cases, in turn disclosed relative variations in the importance of causative factors, such as slope and distance to rivers, on landslide occurrence. Overall, this study illustrates the potential of XAI tools to enhance the transparency of generated maps and to elucidate the underlying causes of specific landslide occurrences.
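The workflow summarized above can be sketched in code. The abstract does not disclose the study's dataset or the specific XAI techniques used, so the sketch below is a minimal, hypothetical illustration: it trains a gradient boosting classifier (the study's best performer) on synthetic stand-ins for landslide conditioning factors, scores it by AUC, and applies permutation importance as one example of a global explanation method. The feature names are placeholders assumed for illustration only.

```python
# Hypothetical sketch: synthetic data and permutation importance stand in for
# the study's (undisclosed) dataset and XAI techniques.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Placeholder conditioning factors (names assumed, not from the study's data)
feature_names = ["slope", "elevation", "dist_to_roads",
                 "lithology", "dist_to_rivers", "aspect"]

# Synthetic binary labels: 1 = landslide pixel, 0 = non-landslide pixel
X, y = make_classification(n_samples=1000, n_features=6, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Black-box model: gradient boosting, the best performer in the study
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC on held-out data: {auc:.4f}")

# Global explanation: rank conditioning factors by permutation importance
result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                random_state=0)
ranking = sorted(zip(feature_names, result.importances_mean),
                 key=lambda t: -t[1])
for name, importance in ranking:
    print(f"{name}: {importance:.3f}")
```

In the study itself, an analogous global ranking identified slope, elevation, distance to roads, and lithological units as the dominant factors; local techniques would additionally explain individual predictions, as was done for the three landslide cases.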