Abstract

A general framework is presented for coordinating multiple competing goals of physical agents in dynamic environments. This approach to goal coordination is a novel tool for endowing purely reactive agents with a deep coordination ability. The framework is based on the notion of multi-objective optimisation. In this article we propose an 'aggregating functions' formulation with the particularity that the aggregation is weighted by a dynamic unitary weighting vector that depends on the dynamic state of the system, allowing the agent to dynamically coordinate the priorities of its individual goals. This dynamic unitary weighting vector is represented as a set of (n − 1) angles. The dynamic coordination must be established through a mapping from the state of the agent's environment S to the set of angles Φ_i(S), obtained by means of some machine-learning tool. In this work, we investigate the use of Reinforcement Learning as a first approach to learning that mapping.
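The abstract does not spell out how the (n − 1) angles determine the unitary weighting vector, but a standard construction consistent with it is hyperspherical coordinates: (n − 1) angles in [0, π/2] yield an n-dimensional non-negative vector of unit Euclidean norm. The sketch below illustrates that construction and the resulting weighted aggregation of the single-goal objective values; the function names, the angle values, and the per-goal values are our own illustrative choices, not taken from the paper.

import numpy as np

def angles_to_unit_weights(phi):
    """Map (n-1) angles to an n-dimensional unit weight vector via
    hyperspherical coordinates. Angles in [0, pi/2] keep every weight
    non-negative. (Illustrative sketch; the paper's abstract does not
    fix this exact mapping.)"""
    phi = np.asarray(phi, dtype=float)
    w = np.ones(phi.size + 1)
    for i, angle in enumerate(phi):
        w[i] *= np.cos(angle)        # close coordinate i with cos
        w[i + 1:] *= np.sin(angle)   # remaining coordinates share sin
    return w

def aggregated_objective(phi, objective_values):
    """Weighted aggregation of the single-goal objective values."""
    w = angles_to_unit_weights(phi)
    return float(np.dot(w, objective_values))

# Example: three goals, so the weight vector needs two angles.
phi = [np.pi / 3, np.pi / 6]           # hypothetical output of Phi_i(S)
goals = np.array([0.2, 0.9, 0.5])      # hypothetical per-goal values
w = angles_to_unit_weights(phi)
print(w, np.linalg.norm(w))            # weights have unit Euclidean norm
print(aggregated_objective(phi, goals))

In such a setup, the role of the learning component would be to map the observed environment state S to the angles phi; the abstract proposes Reinforcement Learning for this, with the angles acting as the agent's action or policy output.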
