Abstract

There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (artificial intelligence), but do not explain how to prevent the creation of dangerous AI elsewhere. Global solutions are those that ensure that no AI on Earth is dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided into four groups: 1. No AI: AGI technology is banned or its use is otherwise prevented; 2. One AI: the first superintelligent AI is used to prevent the creation of any others; 3. Net of AIs as AI police: a balance is created between many AIs, so they evolve as a net and can prevent any rogue AI from taking over the world; 4. Humans inside AI: humans are augmented or become part of AI. We explore many ideas, both old and new, regarding global solutions for AI safety. They include changing the number of AI teams, different forms of an “AI Nanny” (a non-self-improving global AI control system able to prevent the creation of dangerous AIs), selling AI safety solutions, and sending messages to future AI. Not every local solution scales to a global solution, or does so ethically and safely. The choice of the best local solution should therefore include an understanding of the ways in which it will be scaled up. Human-AI teams, or a superintelligent AI Service as suggested by Drexler, may be examples of such ethically scalable local solutions, but the final choice depends on some unknown variables, such as the speed of AI progress.

Highlights

  • The problem of how to prevent a global catastrophe associated with the expected development of AI of above human-level intelligence is often characterized as “AI safety” [1]

  • We suggest a different classification of AI safety solutions, as local or global, and describe only global solutions

  • Strategic advantage achieved by narrow AIs produces global unification, before the rise of superintelligent AI, by leveraging the preexisting advantage of a nuclear power and increasing first-strike capability

  • First AI is created by a superpower and provides it a decisive strategic advantage

  • First AI is reactive, and while it does not prevent the creation of other AIs, it limits their potential danger

  • First AI is a genius at negotiation and solves all conflicts between other agents

Introduction

The problem of how to prevent a global catastrophe associated with the expected development of AI of above human-level intelligence is often characterized as “AI safety” [1]. We suggest a different classification of AI safety solutions, as local or global, and describe only global solutions. Local solutions are those that affect only one AI, and include AI ethics, AI alignment, AI boxing, etc. This article is based on the assumption, shared by many (e.g., [3,12]) but not all AI researchers, that above human-level AI is possible in the relatively near future (the 21st century) and that the world’s socio-political structure will be approximately the same as at the moment of its creation. This assumption about the possibility of superhuman AI is naturally followed by concerns about the safety of such systems, which may generate not only isolated accidents, but a full variety of possible global catastrophes, as explored in Reference [13].

AI Safety Levels
Restriction Solutions
Destructive Solutions
Delay of AI Creation
First AI Seizes World Power
Concentrate the Best AI Researchers to Create a Powerful and Safe AI First
Using the Decisive Advantage of Non-Self-Improving AI to Create an AI Nanny
Risks of Creating Hard-takeoff AI as a Global Solution
AI Nanny Requires a World Government for Its Creation
Levels of Implementation of the AI Nanny Concept
Global Transition into AI
Help Others to Create Safe AI
Selling AI Safety Theory as an Effective Tool to Align Arbitrary AI
Slowing the Appearance of Other AIs
Ways to Affect a Race to Create the First AI
Participating in Acausal Deals with Future AI
How a Net of AIs May Provide Global Safety
The Importance of Number in the Net of AIs
Openness in AI Development
Instruments to Make the Net of AIs Safer
Starting Many AIs Simultaneously
Different Ways to Incorporate Humans inside AI
Even Unfriendly AI Will Preserve Some Humans or Information about Humans
Which Local Solutions Are the Best to Get a Stable Global Solution?
Conclusions
Disclaimer