Abstract

The paper is motivated by the importance of the Smart Cities (SC) concept for the future management of global urbanization and energy consumption. Multi-agent Reinforcement Learning (RL) is an efficient solution for exploiting the large amounts of sensory data provided by the Internet of Things (IoT) infrastructure of SCs for city-wide decision making and demand-response management. Conventional Model-Free (MF) and Model-Based (MB) RL algorithms, however, use a fixed reward model to learn the value function, which makes their application to ever-changing SC environments challenging. Successor Representation (SR)-based techniques are attractive alternatives that address this issue by learning the expected discounted future state occupancy, referred to as the SR, together with the immediate reward of each state. SR-based approaches have, however, mainly been developed for single-agent scenarios and have not yet been extended to multi-agent settings. The paper addresses this gap and proposes the Multi-Agent Adaptive Kalman Filtering-based Successor Representation (MAKF-SR) framework. The proposed framework adapts to changes in a multi-agent environment faster than MF methods and at a lower computational cost than MB algorithms. The proposed MAKF-SR is evaluated through a comprehensive set of experiments, illustrating superior performance compared to its counterparts.
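To make the SR idea referenced above concrete, the sketch below is a minimal single-agent, tabular illustration of learning an SR matrix (expected discounted future state occupancy) and a separate immediate-reward vector, with the value function recovered as their product. The chain environment, learning rates, and mid-training reward change are illustrative assumptions and do not reproduce the paper's multi-agent, Kalman-filtering-based MAKF-SR; they only show why separating the SR from the reward model lets the value estimate adapt quickly when the reward changes.

```python
# Minimal single-agent, tabular SR sketch (illustrative; not the paper's MAKF-SR).
import numpy as np

n_states = 5
gamma = 0.9
alpha_sr, alpha_w = 0.1, 0.1

def step(s):
    # Deterministic chain: state i -> i+1, last state absorbs into itself.
    return min(s + 1, n_states - 1)

def one_hot(i):
    e = np.zeros(n_states)
    e[i] = 1.0
    return e

M = np.eye(n_states)      # SR matrix: expected discounted future state occupancy
w = np.zeros(n_states)    # immediate-reward estimate for each state
reward = np.zeros(n_states)
reward[-1] = 1.0          # goal reward at the terminal state (assumed)

for episode in range(200):
    if episode == 100:
        reward[-1] = -1.0  # reward model changes mid-training
    s = 0
    for _ in range(n_states):
        s_next = step(s)
        r = reward[s_next]
        # TD update of the SR row for the current state
        M[s] += alpha_sr * (one_hot(s) + gamma * M[s_next] - M[s])
        # Separate update of the reward weights; only these must re-adapt
        # when the reward model changes, the SR itself stays valid.
        w[s_next] += alpha_w * (r - w[s_next])
        s = s_next

# Value function recovered as V = M w.
print("V(s) =", M @ w)
```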
