Abstract

In several network problems, the optimal behavior of the agents (i.e., the nodes of the network) is not known before deployment. Furthermore, the agents might be required to adapt, i.e., to change their behavior depending on the environmental conditions. In these scenarios, offline optimization is usually costly and inefficient, while online methods may be more suitable. In this work, we use a distributed Embodied Evolution approach to optimize spatially distributed, locally interacting agents: the agents exchange their behavior parameters and learn from each other in order to adapt to a given task within a given environment. Our results on several test scenarios show that the local exchange of information, performed by means of crossover of behavior parameters with neighbors, allows the network to carry out the optimization process more efficiently than when local interactions are not allowed, even when the optimal behavior parameters differ greatly within each agent's neighborhood.
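The local-exchange mechanism described above can be illustrated with a minimal sketch. Everything concrete here is an assumption for illustration only: the ring topology, the uniform-crossover and Gaussian-mutation operators, the placeholder fitness function, and all names (`fitness`, `crossover`, `mutate`, `TARGET`) are hypothetical and not taken from the paper.

```python
import random

# Hypothetical sketch of distributed Embodied Evolution via neighbor crossover.
# Topology, operators, and the fitness function are illustrative assumptions.

N_AGENTS = 10  # agents arranged on a ring; each interacts with 2 neighbors
N_PARAMS = 4   # size of each agent's behavior-parameter vector
TARGET = [0.2, 0.8, 0.5, 0.1]  # placeholder task optimum (assumption)

def fitness(params):
    # Placeholder objective: negative squared distance to the target vector.
    return -sum((p - t) ** 2 for p, t in zip(params, TARGET))

def crossover(a, b):
    # Uniform crossover: each parameter is taken from either parent.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(params, sigma=0.05):
    # Small Gaussian perturbation of every parameter.
    return [p + random.gauss(0.0, sigma) for p in params]

random.seed(0)
agents = [[random.random() for _ in range(N_PARAMS)] for _ in range(N_AGENTS)]

for generation in range(200):
    scores = [fitness(a) for a in agents]
    new_agents = []
    for i, a in enumerate(agents):
        # Local interaction only: look at the fitter of the two ring neighbors.
        nbr = max(agents[i - 1], agents[(i + 1) % N_AGENTS], key=fitness)
        if fitness(nbr) > scores[i]:
            # Adopt a mixed (crossed-over, mutated) parameter vector.
            a = mutate(crossover(a, nbr))
        new_agents.append(a)
    agents = new_agents

best = max(agents, key=fitness)
```

Each agent improves using only information from its immediate neighborhood; no agent ever sees the whole population, which is the property that makes the scheme fully distributed.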
