Abstract

As the world’s population and economy grow, demand for energy increases as well. Smart grids can be a cost-effective way to meet rising energy demand and ensure power security. Current smart grid applications involve large numbers of agents (e.g., electric vehicles). Because each agent must interact with other agents when making decisions (e.g., about movement and scheduling), the computational complexity of smart grid systems grows exponentially with the number of agents, and the computational tractability of planning is a significant barrier to implementing large-scale smart grids of electric vehicles. Existing solution approaches such as mixed-integer programming and dynamic programming are not computationally efficient for high-dimensional problems. This paper reformulates a Mixed-Integer Programming model as a Decentralized Markov Decision Process and solves it with a Multi-Agent Reinforcement Learning algorithm to address the scalability issues of large-scale smart grid systems. The Decentralized Markov Decision Process model uses centralized training and distributed execution: agents are trained with a unique actor network for each agent and a shared critic network, after which each agent executes actions independently of the others to reduce computation time. The performance of the Multi-Agent Reinforcement Learning model is assessed under different configurations of customers and electric vehicles and compared with deep reinforcement learning and three heuristic algorithms. The simulation results demonstrate that the Multi-Agent Reinforcement Learning algorithm reduces simulation time significantly compared with deep reinforcement learning, the genetic algorithm, particle swarm optimization, and the artificial fish swarm algorithm. The superior performance of the proposed method indicates that it may be a realistic solution for large-scale implementation.
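The centralized-training, distributed-execution structure described above can be sketched in miniature. This is an illustrative toy, not the paper's implementation: the network sizes, linear "networks", and greedy action selection are all assumptions made purely to show the shape of the architecture, in which a shared critic sees the joint observation during training while each agent's actor acts on its own observation alone at execution time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumed, not from the paper).
N_AGENTS, OBS_DIM, N_ACTIONS = 3, 4, 2

# One actor per agent (here a toy linear policy standing in for a network).
actors = [rng.normal(size=(OBS_DIM, N_ACTIONS)) for _ in range(N_AGENTS)]

# A single shared critic that scores the JOINT observation of all agents;
# it is only needed during centralized training.
critic = rng.normal(size=(N_AGENTS * OBS_DIM,))

def act(agent_id: int, own_obs: np.ndarray) -> int:
    """Distributed execution: an agent uses only its own actor and observation."""
    logits = own_obs @ actors[agent_id]
    return int(np.argmax(logits))

def critic_value(joint_obs: np.ndarray) -> float:
    """Centralized training: the shared critic evaluates all observations jointly."""
    return float(joint_obs.reshape(-1) @ critic)

# Each agent decides independently; no inter-agent communication at execution.
obs = rng.normal(size=(N_AGENTS, OBS_DIM))
actions = [act(i, obs[i]) for i in range(N_AGENTS)]
value = critic_value(obs)  # used only to compute training gradients in a real setup
```

Because execution touches only per-agent actors, the decision step scales linearly in the number of agents, which is the source of the runtime advantage the abstract claims over centralized solvers.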

