Abstract

Recent progress in artificial intelligence (AI) has drawn attention to the technology's transformative potential, including what some see as its prospects for causing large-scale harm. We review two influential arguments purporting to show how AI could pose catastrophic risks. The first argument — the Problem of Power-Seeking — claims that, under certain assumptions, advanced AI systems are likely to engage in dangerous power-seeking behavior in pursuit of their goals. We review reasons for thinking that AI systems might seek power, that they might obtain it, that this could lead to catastrophe, and that we might build and deploy such systems anyway. The second argument claims that the development of human-level AI will unlock rapid further progress, culminating in AI systems far more capable than any human — this is the Singularity Hypothesis. Power-seeking behavior on the part of such systems might be particularly dangerous. We discuss a variety of objections to both arguments and conclude by assessing the state of the debate.
