Abstract

Concerns around machine learning’s societal impacts have led to proposals to certify some systems. While prominent governance efforts to date center on networking standards bodies such as the Institute of Electrical and Electronics Engineers (IEEE), we argue that machine learning certification should build on structures from the sustainability domain. The policy challenges of machine learning and sustainability share significant structural similarities, including difficult-to-observe credence properties, such as data collection characteristics or carbon emissions from model training, and value chain concerns, including core-periphery inequalities, networks of labor, and fragmented and modular value creation. While networking-style standards typically draw their adoption and enforcement from the functional need to conform in order to participate in a network, machine learning, despite its digital nature, does not benefit from this dynamic. We therefore apply research on certification systems in sustainability, particularly of commodities, to generate lessons across both areas, informing emerging proposals such as the EU’s AI Act.

Highlights

  • Machine learning systems, often colloquially called “algorithms” or even “artificial intelligence,” are a type of software distinguished by the way that they “learn from experience.”

  • Practitioners and policymakers have worked across many commodities to develop and share best practices. The spill-overs – between certified and non-certified production, between certification systems, and between certification systems and state-based regulation – are well recognized in the sustainability space and are of use to those considering similar concepts in machine learning.

  • As the machine learning community considers the role that certification could play in governing connected social impacts, it should be mindful that the closest analogy to the systems it might wish to develop may be found not in the domains of networking, electronics, telecoms, or even Internet regulation, but in sustainability, environmental governance, and policy.

Summary

Introduction

Machine learning systems, often colloquially called “algorithms” or even “artificial intelligence,” are a type of software distinguished by the way that they “learn from experience.” A machine is said to “learn” if, after being exposed to data, its measured performance at a given task improves (Mitchell 1997). These systems have been at the receiving end of a range of recent critiques concerning the varied social impacts accompanying their commercial use. Certain aspects of both machine learning and sustainability-relevant production can be classified as credence qualities – qualities that cannot be assessed even after use, such as the environmental impacts of producing a particular foodstuff like coffee, or the fundamental rights issues raised in collecting the data used to build a model. This matters in a domain where trade secrets and NDAs typically restrict thorough external analysis of machine learning products or practices within a particular sector or company.
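
To make Mitchell’s (1997) definition concrete, the following minimal sketch (illustrative only, and not drawn from the paper) trains a classifier on progressively more data and reports its accuracy on a fixed held-out test set; “learning” corresponds to that measured performance improving with additional experience.

# Illustrative sketch of Mitchell's (1997) definition of learning:
# performance at a task, as measured on held-out data, improves with
# exposure to more training data.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def measured_performance(n_examples):
    """Train on the first n_examples and return accuracy on the fixed test set."""
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:n_examples], y_train[:n_examples])
    return accuracy_score(y_test, model.predict(X_test))

# More "experience" (training data) should yield higher measured performance.
print("accuracy after 100 examples:", measured_performance(100))
print("accuracy after 1,000 examples:", measured_performance(1000))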

Value chains illuminate underexplored governance issues in machine learning
Proposed private governance of machine learning
Learning from sustainability certification systems
Conclusions