Abstract

The problem of safe and fair conflict resolution among inertial, distributed agents—particularly in highly interactive settings—is of paramount importance to the autonomous vehicles industry. The difficulty of solving this problem can be attributed to the fact that agents have to reason over other agents' complex behaviors. We propose the idea of using a behavioral contract to capture a set of explicitly defined assumptions about how all agents in the environment make decisions. In this article, we present a behavioral contract for a specific class of agents that can guarantee the safety and liveness (i.e., progress) of all agents operating in accordance with it. The behavioral contract has two main components—an ordered behavioral rulebook that the agent uses to select its intended action and some additional constraints that define when an agent has precedence (or not) to take its intended action. If all of the agents act according to this contract, we can guarantee safety under all traffic conditions and liveness for all agents under “sparse” traffic conditions. The formalism of the contract also enables assignment of blame. We provide proofs of correctness of the behavioral contract and validate our results in simulation.
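To make the two components of the contract concrete, the sketch below shows one way an ordered rulebook and a precedence gate could fit together: rules are applied lexicographically (higher-priority rules dominate) to pick an intended action, and a separate precedence check decides whether the agent may execute it or must yield. All names, rules, and the precedence criterion here are illustrative assumptions, not the paper's actual contract or API.

```python
# Minimal sketch (assumed names/structure) of a rulebook-based action selector
# plus a precedence gate, in the spirit of the contract described above.

from dataclasses import dataclass
from typing import Callable, List, Sequence, Tuple


@dataclass(frozen=True)
class Action:
    name: str
    speed: float  # commanded speed, m/s


# A rule maps (agent state, candidate action) -> violation score; lower is better.
Rule = Callable[[dict, Action], float]


def rulebook_select(rules: Sequence[Rule], state: dict,
                    candidates: Sequence[Action]) -> Action:
    """Pick the candidate whose violation vector is lexicographically smallest,
    so earlier (higher-priority) rules dominate later ones."""
    def violation_vector(a: Action) -> Tuple[float, ...]:
        return tuple(rule(state, a) for rule in rules)
    return min(candidates, key=violation_vector)


def has_precedence(my_id: int, conflicting_ids: List[int]) -> bool:
    """Toy precedence constraint: the lowest-id agent among those in conflict
    goes first; the real contract's precedence conditions would be richer
    (geometry, right-of-way, etc.). Placeholder only."""
    return all(my_id < other for other in conflicting_ids)


if __name__ == "__main__":
    # Illustrative rules, ordered by priority: safety before progress.
    no_collision: Rule = lambda s, a: 1.0 if a.speed > s["safe_speed"] else 0.0
    make_progress: Rule = lambda s, a: max(0.0, s["desired_speed"] - a.speed)

    state = {"safe_speed": 5.0, "desired_speed": 10.0}
    candidates = [Action("stop", 0.0), Action("creep", 4.0), Action("go", 10.0)]

    intended = rulebook_select([no_collision, make_progress], state, candidates)
    # Execute the intended action only if this agent has precedence; otherwise yield.
    executed = (intended if has_precedence(my_id=2, conflicting_ids=[5, 7])
                else Action("yield", 0.0))
    print(intended.name, "->", executed.name)
```

Under these assumptions, the safety rule prunes the fast candidate and the progress rule breaks the tie among the remaining ones, which is the lexicographic behavior an ordered rulebook is meant to provide.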
