Abstract

In debates about artificial intelligence (AI), imaginations often run wild. Policy-makers, opinion leaders, and the public tend to believe that AI is already an immensely powerful universal technology, limitless in its possibilities. However, while machine learning (ML), the principal computer science tool underlying today's AI breakthroughs, is indeed powerful, ML is fundamentally a form of context-dependent statistical inference and as such has its limits. Specifically, because ML relies on correlations between inputs and outputs or emergent clustering in training data, today's AI systems can only be applied in well-specified problem domains, still lacking the context-sensitivity of a typical toddler or house pet. Consequently, instead of constructing policies to govern artificial general intelligence (AGI), decision-makers should focus on the distinctive and pressing problems posed by narrow AI, including misconceived benefits and the distribution of benefits, autonomous weapons, and bias in algorithms. AI governance, at least for now, is less about managing super-intelligent systems than about managing those who would create and deploy them, and about supporting the application of AI to narrow, well-defined problem domains. Specific implications of our discussion are as follows:

• AI applications are part of a suite of intelligent tools and systems that ultimately must be regulated as a set. Digital platforms, for example, generate the pools of big data on which AI tools operate; hence, the regulation of digital platforms and of big data is part of the challenge of governing AI. Many platform offerings are, in fact, deployments of AI tools. Focusing on AI alone therefore distorts the governance problem.

• Simply declaring objectives, be they digital privacy, transparency, or avoiding bias, is not sufficient. We must decide what the goals actually will be in operational terms.

• The issues and choices will differ by sector. The consequences of bias and error, for example, will differ between a medical domain, a criminal justice domain, and a retail sales domain.

• The application of AI tools in public policy decision making, whether in the design of transport, waste disposal, policing, or a whole variety of other domains, requires great care. There is a substantial risk of letting a focus on efficiency displace public debate about what the goals should be in the first place. Indeed, public values evolve as part of social and political conflict.

• The economic implications of AI applications are easily exaggerated. Should public investment be concentrated on advancing basic research or on the diffusion of tools and the user interfaces and training needed to implement them?

As difficult as it will be to decide on goals and a strategy to implement them within one community, let alone across regional or international communities, any agreement that goes beyond simple statements of hoped-for outcomes is very unlikely.
