Abstract

Battery energy storage systems provide increasing benefits to power grid operations by reducing resource uncertainty and supporting frequency regulation. It is therefore crucial to obtain the optimal policy for a utility-scale battery to provide these grid services efficiently while accounting for its degradation cost. To solve the optimal battery control (OBC) problem with powerful reinforcement learning (RL) algorithms, this paper develops a new representation of the cycle-based battery degradation model given by the rainflow algorithm. Because the rainflow algorithm depends on the full state-of-charge trajectory, existing work has had to rely on a linearized approximation to convert it into the instantaneous terms required by a Markov Decision Process (MDP) formulation. We instead propose a new MDP form that introduces additional state variables to keep track of past switching points and thereby determine the cycle depth. The proposed degradation model allows the powerful deep Q-network (DQN) based RL algorithm to be adopted to efficiently search for the OBC policy. Numerical tests using real market data demonstrate that, compared with earlier work based on the linearized approximation, the proposed cycle-based degradation model enhances battery operations while mitigating degradation.
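
The following is a minimal, illustrative sketch (not the paper's exact formulation) of the core idea: augmenting the state with the state-of-charge (SoC) at the most recent switching point so that the depth of the ongoing half-cycle, and hence a rainflow-style degradation increment, can be computed step by step. The class and function names, the quadratic stress function, and its coefficients are assumptions made for illustration only.

```python
# Illustrative sketch: per-step degradation from cycle depth tracked via the
# last switching point. The stress function and its parameters are assumed.

def degradation_cost(depth, k=5e-4, exponent=2.0):
    """Assumed cycle-depth stress function: cost grows super-linearly with depth."""
    return k * depth ** exponent


class CycleTracker:
    """Tracks the SoC at the last switching point (charge/discharge reversal)."""

    def __init__(self, soc_init):
        self.last_switch_soc = soc_init   # SoC where the current half-cycle began
        self.prev_soc = soc_init
        self.direction = 0                # +1 charging, -1 discharging, 0 unknown

    def step(self, soc):
        """Update with the new SoC and return the incremental degradation cost
        attributed to this step (difference of cycle-depth costs)."""
        new_direction = 0
        if soc > self.prev_soc:
            new_direction = 1
        elif soc < self.prev_soc:
            new_direction = -1

        # A reversal of the charging direction marks a new switching point.
        if new_direction != 0 and self.direction != 0 and new_direction != self.direction:
            self.last_switch_soc = self.prev_soc

        depth_before = abs(self.prev_soc - self.last_switch_soc)
        depth_after = abs(soc - self.last_switch_soc)
        incremental = degradation_cost(depth_after) - degradation_cost(depth_before)

        if new_direction != 0:
            self.direction = new_direction
        self.prev_soc = soc
        return max(incremental, 0.0)


if __name__ == "__main__":
    tracker = CycleTracker(soc_init=0.5)
    trajectory = [0.6, 0.7, 0.65, 0.55, 0.6, 0.8]   # toy SoC trajectory
    for soc in trajectory:
        print(f"SoC={soc:.2f}  step degradation={tracker.step(soc):.6f}")
```

Because the switching-point SoC is carried as part of the state, the per-step degradation term depends only on the current state and action, which is what makes the cycle-based cost compatible with an MDP formulation and standard DQN training.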
