Abstract

Organizations increasingly delegate agency to artificial intelligence. However, such systems can yield unintended negative effects, as they may produce biases against users or reinforce social injustices. What marks them as a unique grand challenge, however, is not their potentially problematic outcomes but their fluid design. Machine learning algorithms are continuously evolving; as a result, their functioning frequently remains opaque to humans. In this article, we apply recent work on tackling grand challenges through robust action to assess the potential and obstacles of managing the challenge of algorithmic opacity. We stress that although this approach is fruitful, it can be gainfully complemented by a discussion regarding the accountability and legitimacy of solutions. In our discussion, we extend the robust action approach by linking it to a set of principles that can serve to evaluate organisational approaches to tackling grand challenges with respect to their ability to foster accountable outcomes under the intricate conditions of algorithmic opacity.

Highlights

  • Opacity of machine learning algorithms is a grand challenge: in contrast to the global proliferation and societal penetration of earlier technologies, such as the car, electricity or the telephone, modern algorithmic decision systems come with a special kind of opacity. Machine learning algorithms are defined not by a set of rules written by programmers but by algorithmically produced rules of learning: “The internal decision logic of the algorithm is altered as it ‘learns’ on training data” (Burrell, 2016, p. 5)

  • Organizations increasingly delegate agency to artificial intelligence

  • We argue that the grand challenge of algorithmic opacity points towards the necessity of extending the extant approach to robust action with a discussion of principles that can serve to evaluate organisational approaches to tackling grand challenges regarding their ability to foster novel, but legitimate, outcomes under the intricate conditions of algorithmic opacity


Summary

ALGORITHMIC OPACITY

Opacity of machine learning algorithms as a grand challenge

In contrast to the global proliferation and societal penetration of earlier technologies, such as the car, electricity or the telephone, modern algorithmic decision systems come with a special kind of opacity: Machine learning algorithms are defined not by a set of rules written by programmers but by algorithmically produced rules of learning: “The internal decision logic of the algorithm is altered as it ‘learns’ on training data” (Burrell, 2016, p. 5). Importantly, they elude access for technical and procedural reasons. First, they are based, in part, on procedures that are structurally inaccessible and incomprehensible not only to the public and to the organisations that own and employ them, but even to specialists (Ananny, 2016; Burrell, 2016). Second, they are highly fluid technologies that evolve only in the ‘field’ (Sandvig et al., 2016). There is no straightforward way to address poorly transparent and highly fluid algorithmic processes, and organisations cannot deliver accounts for these technologies on their own (Buhmann et al., 2020). Instead, such technologies need to be addressed in a participative and discursive process together with their stakeholders; they need the ‘pragmatic treatment’ that Ferraro et al. (2015) proposed for other grand challenges

ACTION STRATEGIES FOR OPAQUE ALGORITHMS
DISTRIBUTED EXPERIMENTATION
DISCUSSION
ORGANISATIONAL LEGITIMATION
LEGITIMATE NOVELTY
COMMUNICATIVE ENGAGEMENT
CONCLUSION

