Abstract

This article describes Distributed W-Learning (DWL), a reinforcement learning-based algorithm for collaborative agent-based optimization of pervasive systems. DWL supports optimization towards multiple heterogeneous policies and addresses the challenges arising from the heterogeneity of the agents that are charged with implementing them. DWL learns and exploits the dependencies between agents and between policies to improve overall system performance. Instead of always executing the locally-best action, agents learn how their actions affect their immediate neighbors and execute actions suggested by neighboring agents if their importance exceeds the local action's importance when scaled using a predefined or learned collaboration coefficient. We have evaluated DWL in a simulation of an Urban Traffic Control (UTC) system, a canonical example of the large-scale pervasive systems that we are addressing. We show that DWL outperforms widely deployed fixed-time and simple adaptive UTC controllers under a variety of traffic loads and patterns. Our results also confirm that enabling collaboration between agents is beneficial, as is agents' ability to learn the degree to which it is appropriate for them to collaborate. These results suggest that DWL is a suitable basis for optimization in other large-scale systems with similar characteristics.
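The arbitration rule described above can be sketched as follows. This is a minimal, hypothetical illustration (function and variable names are ours, not from the paper): each agent compares the importance (W-value) of its locally-best action against neighbors' suggested actions, with remote importances scaled by a collaboration coefficient, and executes whichever action wins.

```python
# Hypothetical sketch of DWL action arbitration. Names and the flat-list
# interface are illustrative assumptions, not the paper's API.

def select_action(local_w, local_action, remote_suggestions, coop_coeff):
    """Pick the action with the highest (scaled) importance.

    local_w            -- W-value (importance) of the locally-best action
    local_action       -- the locally-best action itself
    remote_suggestions -- list of (remote_w, remote_action) pairs received
                          from immediate neighbors
    coop_coeff         -- collaboration coefficient in [0, 1], either
                          predefined or learned by the agent
    """
    best_w, best_action = local_w, local_action
    for remote_w, remote_action in remote_suggestions:
        scaled_w = coop_coeff * remote_w  # remote importance is discounted
        if scaled_w > best_w:
            best_w, best_action = scaled_w, remote_action
    return best_action

# A neighbor's urgent suggestion overrides a weak local preference
# (0.8 * 0.9 = 0.72 > 0.3), so the remote action is executed.
print(select_action(0.3, "extend_green",
                    [(0.9, "switch_phase"), (0.2, "extend_green")],
                    0.8))  # -> switch_phase
```

With a collaboration coefficient of 0, this degenerates to purely selfish local optimization; values closer to 1 make the agent increasingly deferential to its neighbors, which is the trade-off the paper's learned coefficients tune.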
