Abstract

Cellular technology with long-term evolution (LTE)-based standards is a preferable choice for smart grid neighborhood area networks due to its high availability and scalability. However, integrating cellular networks with smart grid communications poses a significant challenge: the simultaneous transmission of real-time smart grid data can cause radio access network (RAN) congestion. Heterogeneous cellular networks (HetNets) have been proposed to improve the performance of LTE because they can alleviate RAN congestion by off-loading access attempts from a macrocell to small cells. In this paper, we study energy efficiency and delay problems in HetNets for transmitting smart grid data with different delay requirements. We propose a distributed channel access and power control scheme and develop a learning-based approach for the phasor measurement units (PMUs) to transmit data successfully under interference and signal-to-interference-plus-noise ratio (SINR) constraints. In particular, we exploit a deep reinforcement learning (DRL)-based method to train the PMUs to learn an optimal policy that maximizes the reward earned from successful transmissions without prior knowledge of the system dynamics. Results show that the DRL approach achieves good performance without knowing the system dynamics beforehand and outperforms the Gittins index policy for different normal ratios, minimum SINR requirements, and numbers of users in the cell.

Highlights

  • Smart grids have attracted a lot of attention due to their potential to significantly improve the efficiency and reliability of power grids [1]

  • The goal of the deep reinforcement learning (DRL) approach is to ensure that no phasor measurement unit's (PMU's) signal-to-interference-plus-noise ratio (SINR) falls below the threshold required for successful transmission, γ_i ≥ γ_i^min, ∀i ∈ I, and that the interference caused by the PMUs, h_ie p_i(z_i), does not exceed the interference threshold I_th^u, i.e., h_ie p_i(z_i) ≤ I_th^u, ∀u ∈ U, so as to protect the QoS of macrocell users (MUEs)

  • A DRL approach was exploited to obtain an optimal policy that maximizes the discounted reward and enables successful data transmission while accounting for the minimum SINR requirements of the PMUs and the interference they cause to the MUEs and other small cell users (SUEs)
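The two feasibility conditions in the highlights above (per-PMU minimum SINR, and a cap on the interference PMUs cause to MUEs) can be sketched as a simple check. This is an illustrative sketch, not the paper's implementation; the function and variable names (`constraints_satisfied`, `gamma`, `gamma_min`, `interference`, `i_th`) are assumptions.

```python
def constraints_satisfied(gamma, gamma_min, interference, i_th):
    """Check the two constraints from the highlights (illustrative sketch).

    gamma        : list of received SINR values, one per PMU (gamma_i)
    gamma_min    : list of minimum SINR thresholds, one per PMU (gamma_i^min)
    interference : list of interference levels each PMU causes (h_ie * p_i(z_i))
    i_th         : interference threshold protecting MUE QoS (I_th^u)
    """
    # Every PMU must meet its minimum SINR for a successful transmission.
    sinr_ok = all(g >= g_min for g, g_min in zip(gamma, gamma_min))
    # No PMU may cause interference above the threshold.
    interference_ok = all(i <= i_th for i in interference)
    return sinr_ok and interference_ok
```

A transmission attempt would only earn the success reward when both conditions hold; otherwise the agent is penalized and must adjust its channel or power choice.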


Summary

INTRODUCTION

Smart grids have attracted a lot of attention due to their potential to significantly improve the efficiency and reliability of power grids [1]. DRL has the ability to deal with high-dimensional and large system states, such as those in HetNets [20], [21]. For these reasons, a DRL approach is exploited to train the PMUs to access channels and regulate their power so as to achieve maximum energy efficiency and satisfy delay constraints in a distributed manner, using inputs extracted from the environment. To maximize energy efficiency and meet the delay constraints of the PMUs in HetNets, we exploit an intelligent channel access and power control scheme that takes into account the differentiated delay requirements of the PMUs using a DRL approach. Despite this greedy behavior, it is important for the PMUs to adapt to environmental changes, as energy efficiency is highly dependent on environmental factors such as the MUEs' behavior and QoS requirements [27]
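The joint channel access and power control described above can be sketched with a minimal tabular Q-learning loop, where each action is a (channel, power level) pair. This is a simplified stand-in: the paper's actual agent is a deep RL network with its own state, action, and reward definitions, and all names and parameter values here (`ALPHA`, `GAMMA_DISC`, `EPSILON`, the channel/power grids) are assumptions for illustration.

```python
import random
from collections import defaultdict

# Illustrative hyperparameters (assumed, not from the paper).
ALPHA, GAMMA_DISC, EPSILON = 0.1, 0.9, 0.1
CHANNELS, POWER_LEVELS = range(3), range(4)
# Joint action space: choose a channel and a transmit power level together.
ACTIONS = [(c, p) for c in CHANNELS for p in POWER_LEVELS]
Q = defaultdict(float)  # Q[(state, action)] -> estimated discounted return


def choose_action(state):
    """Epsilon-greedy selection over joint (channel, power) actions."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])


def update(state, action, reward, next_state):
    """Standard Q-learning update toward the bootstrapped target."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA_DISC * best_next - Q[(state, action)])
```

In the paper's setting, the reward would reflect energy-efficient successful transmissions (SINR and interference constraints satisfied within the delay budget), so the learned policy favors channel/power choices that earn that reward without a model of the system dynamics.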

SIGNAL-TO-INTERFERENCE-PLUS-NOISE RATIO AND DATA RATE OF THE PMUS
QUEUE DYNAMICS OF PMUS
DELAY AND ENERGY EFFICIENCY MODEL
A PROPOSED DEEP REINFORCEMENT LEARNING
MARKOV DECISION PROCESS ELEMENTS
Q-LEARNING FOR PMU
SIMULATION RESULTS AND DISCUSSION
CONCLUSION
