Abstract

The widespread use of machine learning (ML) models for decision-making raises critical concerns about transparency and accountability – to which an increasingly popular solution is ‘Explainable AI’ (XAI). Here, the objects of explanation are technically complex models that are difficult, or even impossible, to explain. In contrast, this paper makes a call to de-centre models as the object of explanation and look towards the network of ‘machine learning practice’ that brings models into being and use. We explore this term through an ethnographic study, conducted in collaboration with a large financial services company. Drawing on recent STS research, we ask: what would an explanation look like from a position that recognises the emergent and relational nature of machine learning practice, and how might this contribute to greater accountability and responsibility for ML in use? Inspired by the engaged programme in STS, we explore if and how approaching explanation through ML practice can be mobilised to intervene in how explanations are done in organisations. Our empirical analysis shows an ‘ecology’ of multiple, situated and intra-acting explanations for machine learning practice across a range of human and non-human actors in the company. We argue that while XAI is inevitably partial and limited, its value lies in establishing explanations as an imperative in contexts where ML is implicated in decision-making. Overall, our research suggests a need to widen and deepen the search for explanations and explore the opportunities for provisional, relational and collective interrogations over what can (and can’t) be explained about ML practice.
