Abstract

Transportation systems of the future can be best modeled as multi-agent systems. A number of coordination protocols, such as autonomous intersection management (AIM), adaptive cooperative traffic light control (TLC), and cooperative adaptive cruise control (CACC), have been developed with the goal of improving the safety and efficiency of such systems. The overall goal in these systems is to provide behavioral guarantees under the assumption that the participating agents work in concert with a centralized (or distributed) coordinator. While there is work on analyzing such systems from a security perspective, we argue that there is limited work on quantifying the trustworthiness of individual agents in a multi-agent system. We propose a framework that uses an epistemic logic to quantify the trustworthiness of agents, and we embed these quantitative trust values into control and coordination policies. Our modified control policies can help the multi-agent system improve its safety in the presence of untrustworthy agents (and, under certain assumptions, malicious agents). We empirically show the effectiveness of our proposed trust framework by embedding it into AIM, TLC, and CACC platooning algorithms. In our experiments, our trust framework accurately detects attackers in CACC platoons and mitigates the effect of untrustworthy agents in AIM, and the trust-aware TLC and AIM variants reduce collisions in all cases compared to the vanilla versions of these algorithms.
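To make the idea of embedding quantitative trust values into a control policy concrete, the sketch below shows one hypothetical way a CACC-style follower might discount V2V claims from a low-trust predecessor. This is not the paper's algorithm or its epistemic-logic formulation; the class names, gains, and the simple trust-update rule are illustrative assumptions only.

```python
# Hypothetical sketch (not the paper's algorithm): folding a quantitative
# trust score into a CACC-style longitudinal controller. All names, gains,
# and the trust-update rule below are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class NeighborReport:
    agent_id: str
    claimed_gap: float      # gap to predecessor reported over V2V (m)
    measured_gap: float     # gap measured by onboard sensors (m)


class TrustAwareController:
    def __init__(self, desired_gap: float = 10.0, kp: float = 0.5):
        self.desired_gap = desired_gap
        self.kp = kp
        self.trust: dict[str, float] = {}   # agent_id -> trust in [0, 1]

    def update_trust(self, report: NeighborReport, tolerance: float = 1.0) -> None:
        """Raise trust when V2V claims agree with local sensing, lower it otherwise."""
        t = self.trust.get(report.agent_id, 0.5)
        consistent = abs(report.claimed_gap - report.measured_gap) <= tolerance
        t = min(1.0, t + 0.05) if consistent else max(0.0, t - 0.2)
        self.trust[report.agent_id] = t

    def acceleration(self, report: NeighborReport) -> float:
        """Blend V2V and sensed gaps, weighting the V2V claim by current trust."""
        t = self.trust.get(report.agent_id, 0.5)
        gap = t * report.claimed_gap + (1.0 - t) * report.measured_gap
        return self.kp * (gap - self.desired_gap)


# Usage: a predecessor that misreports its gap quickly loses influence on control.
ctrl = TrustAwareController()
honest = NeighborReport("veh_1", claimed_gap=9.5, measured_gap=9.6)
lying = NeighborReport("veh_1", claimed_gap=30.0, measured_gap=9.6)
for report in [honest, lying, lying, lying]:
    ctrl.update_trust(report)
    print(ctrl.trust["veh_1"], ctrl.acceleration(report))
```

In this toy version the control input degrades gracefully toward locally sensed data as trust drops, which is the general intuition behind trust-aware variants of AIM, TLC, and CACC; the paper's actual policies and trust quantification should be taken from the full text.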
