Abstract

For a team of collaborative agents navigating an unknown environment, collaborative actions such as sensing the traversability of a route can have a large impact on aggregate team performance. However, planning over the full space of joint team actions is generally computationally intractable. Furthermore, typically only a small number of collaborative actions are useful for a given team task, yet it is not obvious how to assess the usefulness of a given action. In this work, we model collaborative team policies on stochastic graphs using macro-actions, where each macro-action for a given agent can consist of a sequence of movements, sensing actions, and actions of waiting to receive information from other agents. To reduce the number of macro-actions considered during planning, we generate optimistic approximations of candidate future team states and then restrict the planning domain to a small policy class consisting only of macro-actions that are likely to lead to high-reward future team states. We optimize team plans over this small policy class and demonstrate, in toy graph and island road network domains, that the approach enables a team to find policies that actively balance reducing task-relevant environmental uncertainty against efficiently navigating to goals, yielding better plans than policies that do not act to reduce environmental uncertainty.
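To make the pruning idea concrete, below is a minimal Python sketch of how candidate macro-actions might be scored with an optimistic estimate of the resulting team state and the planning domain restricted to a small policy class. The names (`MacroAction`, `optimistic_value`, `restrict_policy_class`) and the scoring heuristic are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple
import heapq

@dataclass(frozen=True)
class MacroAction:
    """Hypothetical macro-action: a sequence of primitive steps for one agent."""
    agent: int
    steps: Tuple[str, ...]  # e.g. ("move:v3", "sense:e7", "wait")

def optimistic_value(state, macro: MacroAction) -> float:
    """Optimistically score the approximate team state reached after `macro`,
    e.g. assuming uncertain edges sensed along the way turn out traversable.
    Placeholder heuristic (an assumption): short macro-actions that include
    a sensing step score higher."""
    bonus = 1.0 if any(s.startswith("sense") for s in macro.steps) else 0.0
    return bonus - 0.1 * len(macro.steps)

def restrict_policy_class(state, candidates: List[MacroAction], k: int) -> List[MacroAction]:
    """Keep only the k macro-actions whose optimistic future team states score highest."""
    return heapq.nlargest(k, candidates, key=lambda m: optimistic_value(state, m))

# Usage: enumerate candidate macro-actions per agent, prune to a small policy
# class, then plan over the reduced joint space (the joint planner is omitted).
candidates = [
    MacroAction(0, ("move:v1", "sense:e2", "move:v4")),
    MacroAction(0, ("move:v1", "move:v4")),
    MacroAction(1, ("wait", "move:v5")),
]
small_class = restrict_policy_class(state=None, candidates=candidates, k=2)
print([m.steps for m in small_class])
```

The design choice sketched here is that pruning happens before joint planning, so the planner only ever reasons over the reduced set of macro-actions per agent rather than the full joint action space.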
