Designing AI systems with the capacity to explain their behaviour is paramount to enable human oversight, facilitate trust, promote acceptance of technology and, ultimately, empower users and improve their experience. There are, however, several challenges to explainable AI, one of which is the generation and selection of explanations from the causal history of a given event. Causal attribution, among other cognitive processes, has been found to be influenced by socio-cultural factors, which suggests that there could be systematic differences between communities of users in their preferences for AI explanations, according to their cognitive style and socio-cultural characteristics. In this paper, we investigate the relationship between preferences for the explanations provided by belief-desire-intention AI agents, cognitive style (holistic vs analytic), and socio-cultural factors such as gender, education, social class, and political and religious beliefs. We found relationships between explanation preference, cognitive style, and several socio-cultural characteristics: a holistic cognitive style is associated with a preference for goal explanations, while an analytic cognitive style is associated with a preference for belief explanations. The socio-cultural variables associated with explanation preference are gender, religious beliefs, educational attainment, some fields of education, and political party affiliation.