Abstract

Load forecasting has long been a key task in reliable power system planning and operation. In recent years, advanced metering infrastructure has proliferated across the industry, giving rise to many load forecasting methods based on frequent measurements of power states obtained by smart meters. Meanwhile, real-world constraints arising in this new setting present both challenges and opportunities for achieving high load forecastability. One such constraint is the bandwidth limit often imposed on transmission between data concentrators and utilities, which restricts the amount of data that can be sampled from customers. However, there is currently no sampling-rate control policy that self-adapts to users' load behaviors through online data interaction with the smart grid environment. In this paper, we formulate the bandwidth-constrained sampling-rate control problem as a Markov decision process (MDP) and propose a reinforcement learning (RL)-based algorithm that solves the MDP for an optimal sampling-rate control policy. The resulting policy can be updated in real time to accommodate the volatile load behaviors observed in the smart grid. Numerical experiments show that the proposed RL-based algorithm outperforms competing algorithms and delivers superior predictive performance.
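To make the setup concrete, the sketch below shows, under stated assumptions, how a bandwidth-constrained sampling-rate control problem could be cast as an MDP and solved with tabular Q-learning. The state space (discretized load volatility), action set of candidate sampling rates, reward shape, and constants such as BANDWIDTH_BUDGET are hypothetical placeholders for illustration only; they are not the paper's actual formulation or algorithm.

```python
# Illustrative sketch only: a minimal tabular Q-learning loop for a toy
# bandwidth-constrained sampling-rate control MDP. States, actions, the
# reward function, and BANDWIDTH_BUDGET are hypothetical placeholders,
# not the paper's formulation.
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 4                     # discretized levels of recent load volatility
SAMPLING_RATES = [1, 2, 4, 8]    # candidate samples per interval (actions)
BANDWIDTH_BUDGET = 6             # max samples the channel carries per interval

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
Q = np.zeros((N_STATES, len(SAMPLING_RATES)))

def step(state, rate):
    """Toy environment: reward trades forecast accuracy against bandwidth use.
    Higher-volatility states benefit more from higher sampling rates."""
    accuracy_gain = np.log1p(rate) * (state + 1)
    over_budget_penalty = max(0, rate - BANDWIDTH_BUDGET) * 5.0
    reward = accuracy_gain - over_budget_penalty - 0.2 * rate
    next_state = rng.integers(N_STATES)   # volatility evolves randomly here
    return next_state, reward

state = rng.integers(N_STATES)
for t in range(20_000):
    # epsilon-greedy selection over candidate sampling rates
    if rng.random() < EPSILON:
        action = int(rng.integers(len(SAMPLING_RATES)))
    else:
        action = int(np.argmax(Q[state]))
    next_state, reward = step(state, SAMPLING_RATES[action])
    # standard Q-learning update
    Q[state, action] += ALPHA * (
        reward + GAMMA * Q[next_state].max() - Q[state, action]
    )
    state = next_state

# Greedy policy: one sampling rate per volatility level
print([SAMPLING_RATES[int(a)] for a in Q.argmax(axis=1)])
```

In this toy version, the learned greedy policy assigns higher sampling rates to more volatile load states while the penalty term discourages exceeding the bandwidth budget; an online variant would keep updating Q as new smart-meter observations arrive.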
