With the increasing penetration of distributed energy resources (DERs) in distribution networks, Volt-VAR control and optimization (VVC/VVO) have become essential for ensuring an acceptable quality of service to all customers. System operators can rely on slow-responding utility devices, including capacitor banks and on-load tap-changing transformers, along with fast-responding battery and photovoltaic (PV) inverters, for VVC/VVO implementation. Because these two classes of devices differ in response time and in control action type (discrete versus continuous), their coordinated and optimal scheduling and operation are of utmost importance. This paper develops a look-ahead deep reinforcement learning (DRL)-based multi-objective VVO technique to improve the voltage profile of active distribution networks, decrease network and inverter power losses, and reduce the operational cost of the grid. It proposes a deep deterministic policy gradient (DDPG)-based agent to schedule the optimal reactive and/or active power set-points of fast-responding inverters, and a deep Q-network (DQN)-based agent to schedule the discrete decision variables of slow-responding assets. The reactive power output of PV and battery smart inverters is scheduled at 30-minute intervals, while the capacitors' commitment status is scheduled at multi-hour intervals. The proposed framework is validated on modified IEEE 34-bus and 123-bus test cases with embedded PV and PV-plus-storage. To demonstrate its efficacy, the proposed VVO is compared against several scenarios, including the base case without VVO, localized droop control of DERs, a DDPG-only agent, and a twin delayed DDPG (TD3) agent. The results confirm the superior performance of the proposed method in improving the voltage profile, reducing network power loss, and minimizing the look-ahead grid operational cost, while also minimizing the undesirable inverter power losses that result from power factor adjustments.
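The two-timescale coordination described above can be sketched as a scheduling loop in which a DQN-style agent commits capacitor banks at a slow cadence while a DDPG-style actor emits bounded continuous reactive set-points every 30-minute step. The sketch below is illustrative only: the policy internals (random Q-estimates, a tanh-bounded actor), the device counts, the 4-hour slow period, and the reactive limit `Q_MAX` are all assumptions standing in for the paper's trained agents and network model.

```python
import numpy as np

rng = np.random.default_rng(0)

SLOW_PERIOD_STEPS = 8        # 8 x 30 min = 4-hour capacitor cadence (assumed)
N_CAPS, N_INVERTERS = 2, 3   # illustrative device counts (assumed)
Q_MAX = 0.5                  # inverter reactive-power limit in p.u. (assumed)

def dqn_capacitor_policy(state):
    """Stand-in for a trained DQN: greedy discrete on/off commitment."""
    q_values = rng.normal(size=2 ** N_CAPS)        # placeholder Q-estimates
    best = int(np.argmax(q_values))
    return [(best >> i) & 1 for i in range(N_CAPS)]  # decode bits to statuses

def ddpg_inverter_policy(state):
    """Stand-in for a trained DDPG actor: continuous set-points in [-Q_MAX, Q_MAX]."""
    raw = np.tanh(rng.normal(size=N_INVERTERS))    # tanh bounds the action
    return Q_MAX * raw

schedule = []
cap_status = [0] * N_CAPS
for step in range(24):                             # 12-hour look-ahead horizon
    state = rng.normal(size=4)                     # placeholder grid measurements
    if step % SLOW_PERIOD_STEPS == 0:              # slow agent acts every few hours
        cap_status = dqn_capacitor_policy(state)
    q_setpoints = ddpg_inverter_policy(state)      # fast agent acts every step
    schedule.append((step, list(cap_status), q_setpoints))

print(len(schedule))
```

In the full method, both policies would be trained against a power-flow environment whose reward encodes voltage deviation, network and inverter losses, and operational cost; here the loop only illustrates how the discrete and continuous decisions interleave on their respective timescales.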