Abstract

The prospect of artificial general intelligence (AGI) systems that surpass human intelligence in all domains has sparked intense debate among researchers, policymakers, and the public. This note argues that for an AI to be considered “general”, it should achieve superhuman performance not only in zero-sum games but also in general-sum games, where winning and losing are not clearly defined. To that end, I propose a game-theoretic framework that captures the strategic interaction between a representative human agent and a potential superhuman machine agent. Four assumptions underpin this framework: Superhuman Machine, Machine Strategy, Rationality, and Strategic Unpredictability. The main result is an impossibility theorem: the four assumptions are inconsistent when taken together, but relaxing any one of them yields a consistent set. This note contributes to a better understanding of the theoretical context that can shape the development of superhuman AI.
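
To make the impossibility claim concrete, the following is a minimal logical sketch of the theorem's form; the notation $A_1,\dots,A_4$ for the four assumptions is mine and not taken from the paper itself:

% Hypothetical sketch of the impossibility theorem's logical structure.
% A1: Superhuman Machine, A2: Machine Strategy,
% A3: Rationality, A4: Strategic Unpredictability.
\begin{theorem}[Impossibility, informal sketch]
  The conjunction $A_1 \wedge A_2 \wedge A_3 \wedge A_4$ is unsatisfiable,
  yet for each $i \in \{1,2,3,4\}$ there exists a model of the
  human--machine game in which $\bigwedge_{j \neq i} A_j$ holds.
\end{theorem}

In other words, no model of the human–machine interaction satisfies all four assumptions simultaneously, while dropping any single assumption restores consistency.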
