Abstract

The ability to say “no” in a variety of ways and contexts is an essential part of being sociocognitively human. Rebel agents are artificially intelligent agents that can refuse assigned goals and plans, or oppose the behavior or attitudes of other agents. Rebel agents can serve purposes such as ethics, safety, task execution correctness, and providing or supporting diverse points of view. Through several examples, we show that, despite ominous portrayals in science fiction, such AI agents with human‐inspired noncompliance abilities have many potential benefits. We present a framework to help categorize and design rebel agents, discuss their social and ethical implications, and assess their potential benefits and the risks they may pose. In recognition of the fact that, in human psychology, noncompliance has profound sociocognitive implications, we also explore sociocognitive dimensions of AI rebellion: social awareness and counternarrative intelligence.
