Abstract
A district cooling system (DCS), a large-capacity air-conditioning system that supplies cooling to multiple buildings, is an ideal resource for providing frequency regulation services to power systems. To provide high-quality services and maximize the DCS's revenue from the electricity market, an accurate estimate of its regulation capacity is indispensable. Inaccurate regulation capacity estimation may lead to unsatisfactory cooling supply for buildings and/or poor regulation service quality that is penalized by the market. However, estimating a DCS's regulation capacity is challenging, because a DCS usually has complex thermal dynamics that are difficult to model, and its cooling demands and regulation signals are highly stochastic. To address these challenges, this paper proposes a DCS regulation capacity offering strategy based on deep reinforcement learning (DRL). The strategy is model-free and can effectively handle various uncertainties. Furthermore, because DRL training requires extensive trial and error, during which "bad" decisions may harm the actual physical system, we propose a novel intrinsically motivated method based on pseudo-counts to improve training efficiency. Numerical studies on a realistic DCS illustrate the effectiveness of the proposed method.
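The abstract does not specify how the pseudo-count is constructed, so the sketch below is only a minimal illustration of the general idea of a count-based intrinsic bonus added to the training reward: the continuous DCS state is coarsely discretized, visits per cell are tabulated, and a bonus proportional to 1/sqrt(count) is added so rarely visited states are explored more. All names and parameters (PseudoCountBonus, bin_width, beta) are hypothetical and not taken from the paper.

```python
import numpy as np
from collections import defaultdict


class PseudoCountBonus:
    """Illustrative count-based intrinsic-reward bonus (not the paper's exact method).

    Discretizes the continuous state onto a coarse grid, counts visits per cell,
    and adds beta / sqrt(count) to the environment (extrinsic) reward.
    """

    def __init__(self, bin_width=0.1, beta=0.05):
        self.bin_width = bin_width      # discretization resolution (assumed value)
        self.beta = beta                # bonus weight (assumed value)
        self.counts = defaultdict(int)  # visit counts over discretized states

    def _key(self, state):
        # Map a continuous state vector to a coarse grid cell.
        return tuple(np.floor(np.asarray(state) / self.bin_width).astype(int))

    def shaped_reward(self, state, extrinsic_reward):
        # Update the visit count for this cell and return the shaped reward.
        key = self._key(state)
        self.counts[key] += 1
        bonus = self.beta / np.sqrt(self.counts[key])
        return extrinsic_reward + bonus


# Usage inside a generic DRL training loop (environment and agent are placeholders):
# bonus = PseudoCountBonus()
# r_train = bonus.shaped_reward(next_state, r_env)  # reward actually used for updates
```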