Abstract

A district cooling system (DCS), a type of large-capacity air conditioning system that supplies cooling to multiple buildings, is an ideal resource for providing frequency regulation services to power systems. To provide high-quality services and maximize a DCS's revenue from the electricity market, an accurate estimation of the DCS's regulation capacity is indispensable. Inaccurate regulation capacity estimation may lead to unsatisfactory cooling supply for buildings and/or poor regulation service quality that may be penalized by the market. However, estimating a DCS's regulation capacity is quite challenging, because a DCS usually has complex thermal dynamics to model, and its cooling demands and regulation signals are usually highly stochastic. To address these challenges, this paper proposes a DCS regulation capacity offering strategy based on deep reinforcement learning (DRL). It is model-free and can effectively handle various uncertainties. Furthermore, considering that the training process of DRL requires many trials and errors, which may harm the actual physical system through "bad" decisions, we propose a novel intrinsic-motivation method based on pseudo-counts to improve training efficiency. Numerical studies based on a realistic DCS illustrate the effectiveness of the proposed method.
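For readers unfamiliar with pseudo-count-based intrinsic motivation, the sketch below shows one common way such a bonus can be computed; it is an illustrative assumption only, since the abstract does not specify the authors' density model, state discretization, or hyperparameters (the class name, bin count, and coefficient beta here are hypothetical).

```python
# Minimal sketch of a pseudo-count-style intrinsic reward, assuming a simple
# hash/bin-based visit count over discretized state features. This is NOT the
# paper's exact method; it only illustrates the general technique.
from collections import defaultdict
import numpy as np


class PseudoCountBonus:
    """Intrinsic reward r_int = beta / sqrt(N(phi(s))), where N counts visits to
    a coarse discretization phi(s) of the state. Rarely visited states receive a
    larger bonus, steering exploration during training."""

    def __init__(self, beta=0.1, n_bins=20, low=None, high=None):
        self.beta = beta          # scale of the intrinsic bonus (hypothetical value)
        self.n_bins = n_bins      # number of buckets per state dimension
        self.low = low            # per-dimension lower bounds of the state space
        self.high = high          # per-dimension upper bounds of the state space
        self.counts = defaultdict(int)

    def _discretize(self, state):
        state = np.asarray(state, dtype=float)
        low = self.low if self.low is not None else np.zeros_like(state)
        high = self.high if self.high is not None else np.ones_like(state)
        # Map each state dimension into one of n_bins buckets.
        scaled = (state - low) / np.maximum(high - low, 1e-8)
        bins = np.clip((scaled * self.n_bins).astype(int), 0, self.n_bins - 1)
        return tuple(bins.tolist())

    def bonus(self, state):
        # Update the visit count for this state's bucket and return the bonus.
        key = self._discretize(state)
        self.counts[key] += 1
        return self.beta / np.sqrt(self.counts[key])


# Example usage inside a generic DRL training loop (env/agent are placeholders):
# bonus = PseudoCountBonus(beta=0.1, low=state_low, high=state_high)
# shaped_reward = extrinsic_reward + bonus.bonus(next_state)
```

In this kind of scheme, the shaped reward is used only during training to make exploration more sample-efficient; the deployed policy acts on the extrinsic (market and cooling-quality) objective alone.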
