Abstract

This paper presents a multi-agent Double Deep Q Network (DDQN) based on deep reinforcement learning for solving the transmission network expansion planning (TNEP) problem of a high-penetration renewable energy source (RES) system under uncertainty. First, a K-means algorithm that enhances the extraction quality of the uncertain characteristics of variable wind and load power is proposed; its clustering objective function considers the cumulative value and change rate of the operation data. Then, based on the resulting typical scenarios, we build a bi-level TNEP model that includes comprehensive cost, electrical betweenness, wind curtailment, and load shedding to evaluate the stability and economy of the network. Finally, we propose a multi-agent DDQN that predicts the construction value of each line through interaction with the TNEP model and then optimizes the line construction sequence. This training mechanism is more traceable and interpretable than heuristic-based methods. Moreover, the experience-reuse property of the multi-agent DDQN allows it to be applied to multi-scenario TNEP tasks without repeated training. Simulation results on the modified IEEE 24-bus system and the New England 39-bus system verify the effectiveness of the proposed method.
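
As a rough illustration of the learning rule behind the proposed method, the sketch below shows the standard Double DQN target computation in PyTorch. The network names (q_net, target_net), batch layout, and discount factor are illustrative assumptions rather than details taken from the paper, in which each candidate transmission line is additionally handled by its own agent.

    # Minimal sketch of the Double DQN target used in DDQN-style training.
    # All names, shapes, and hyperparameters are assumptions for illustration only.
    import torch
    import torch.nn as nn

    def ddqn_target(q_net: nn.Module,
                    target_net: nn.Module,
                    reward: torch.Tensor,      # shape (batch,)
                    next_state: torch.Tensor,  # shape (batch, state_dim)
                    done: torch.Tensor,        # shape (batch,), 1.0 if terminal
                    gamma: float = 0.99) -> torch.Tensor:
        """Online network selects the next action, target network evaluates it,
        which mitigates the Q-value overestimation of vanilla DQN."""
        with torch.no_grad():
            next_actions = q_net(next_state).argmax(dim=1, keepdim=True)        # action selection
            next_q = target_net(next_state).gather(1, next_actions).squeeze(1)  # action evaluation
            return reward + gamma * (1.0 - done) * next_q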

Highlights

  • Although countries have actively implemented Nationally Determined Contributions (NDCs) to alleviate climate deterioration in recent years, global greenhouse gas emissions continue to grow and have not yet peaked. To keep the future temperature rise within 1.5 °C, the United Nations Environment Programme advocates that countries around the world reduce their emissions to close the gap between current greenhouse gas emission levels and the Paris Agreement provisions [1].

  • This paper proposes a multi-agent Double Deep Q Network (DDQN) for the transmission network expansion planning (TNEP) task considering the uncertainties of wind power and load.


Summary

Introduction

Although countries have actively implemented Nationally Determined Contributions (NDCs) to mitigate climate deterioration in recent years, global greenhouse gas emissions continue to grow and have not yet peaked. This paper uses system operation data to construct uncertainty models of renewable energy and variable load to assist in solving the TNEP task, and applies data compression to maintain computational efficiency while retaining the main characteristics of the uncertainty models. The TNEP model of a system with high penetration of RES should fully consider the uncertain characteristics of renewable energy and variable loads and improve system stability in the most economical way. When undertaking TNEP tasks for such a system, the deep reinforcement learning environment should capture the uncertain characteristics of wind power and variable load.
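
As a rough sketch of how typical scenarios might be extracted from operation data, the following Python snippet clusters daily wind/load profiles with K-means after augmenting each profile with its cumulative value and hour-to-hour change rate. This only loosely mirrors the modified clustering objective described in the paper; the array shapes, function names, and weighting scheme are assumptions.

    # Illustrative typical-scenario extraction with K-means (assumed data layout:
    # one row per day of hourly wind or load power). Not the paper's exact objective.
    import numpy as np
    from sklearn.cluster import KMeans

    def typical_scenarios(profiles: np.ndarray, n_clusters: int = 5):
        """profiles: (n_days, T) array of hourly wind or load power."""
        cumulative = profiles.cumsum(axis=1)                               # cumulative energy over the day
        change_rate = np.diff(profiles, axis=1, prepend=profiles[:, :1])   # hour-to-hour ramp
        features = np.hstack([profiles, cumulative, change_rate])
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(features)
        # Cluster centres restricted to the raw-profile block serve as typical scenarios;
        # label counts give each scenario's probability weight.
        centers = km.cluster_centers_[:, :profiles.shape[1]]
        weights = np.bincount(km.labels_, minlength=n_clusters) / len(profiles)
        return centers, weights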

TNEP Bi-Level Model Based on Typical Scenarios of Wind Power and Load
Bi-Level Multi-Objective TNEP Model
Upper-Level Objective Function
Upper-Level Constraints
Lower-Level Objective Function
Lower-Level Constraints
Multi-Agent DDQN for Transmission Network Expansion Planning
Multi-Agent DDQN Structure
Modified IEEE RTS-24 Bus System with High-Penetration RES
Typical Scenarios
TNEP in Modified New England 39-Bus System
Findings
Conclusions
