Abstract

Multi-agent Plan Recognition (MPAR) infers teams and their goals from the observed actions of individual agents. The complexity of creating a priori plan libraries increases significantly when they must account for the diversity of action sequences that different team structures may exhibit. A key challenge in MPAR is effectively pruning the joint search space of agent-to-team compositions and goal-to-team assignments. Here, we describe discrete Multi-agent Plan Recognition as Planning (MAPRAP), which extends Ramirez and Geffner’s Plan Recognition as Planning (PRAP) approach to multi-agent domains. Instead of a plan library, MAPRAP uses the planning domain and synthesizes plans to achieve hypothesized goals under additional constraints for suspected team compositions and previous observations. By comparing the costs of these plans, MAPRAP identifies feasible interpretations that explain the observed teams and plans. We establish a performance profile for discrete MAPRAP in a multi-agent blocks-world domain, evaluating precision, accuracy, and recall after each observation. We compare two pruning strategies that dampen the explosion of hypotheses tested. Aggressive pruning averages 1.05 plans synthesized per goal per time step in multi-agent scenarios versus 0.56 in single-agent scenarios.
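
The PRAP-style feasibility test at the heart of this approach can be pictured as follows: for each hypothesized (team, goal) pair, synthesize an optimal plan with and without the constraint that it embed the observations so far, and keep the hypothesis only when the observation-constrained plan costs no more than the unconstrained one. The minimal Python sketch below illustrates that loop under stated assumptions; the planner call plan_cost and the exhaustive hypothesis enumeration are hypothetical stand-ins for illustration, not the paper's implementation, and the pruning strategies evaluated in the paper are omitted.

    # Minimal sketch of the PRAP-style cost-comparison test used to judge
    # (team, goal) hypotheses. `plan_cost` is a hypothetical stand-in for an
    # external optimal planner call; it is not part of the paper's code.
    from itertools import combinations
    from typing import Callable, FrozenSet, Iterable, Optional, Sequence, Tuple

    Hypothesis = Tuple[FrozenSet[str], str]  # (team of agent ids, hypothesized goal)

    def feasible_hypotheses(
        agents: Sequence[str],
        goals: Sequence[str],
        observations: Sequence[str],
        plan_cost: Callable[[FrozenSet[str], str, Sequence[str]], Optional[float]],
    ) -> Iterable[Hypothesis]:
        """Yield (team, goal) pairs whose observation-constrained optimal plan
        costs no more than the unconstrained optimal plan, i.e. the observed
        actions are consistent with that team rationally pursuing that goal."""
        for size in range(1, len(agents) + 1):
            for team in map(frozenset, combinations(agents, size)):
                for goal in goals:
                    baseline = plan_cost(team, goal, [])                # optimal cost, no observations
                    constrained = plan_cost(team, goal, observations)   # plan must embed observed actions
                    if baseline is not None and constrained is not None \
                            and constrained <= baseline:
                        yield (team, goal)

In practice the constrained cost can never drop below the unconstrained optimum, so the comparison effectively checks for equality; the pruning strategies in the paper exist precisely to avoid enumerating and re-planning for every hypothesis at every time step.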
