Abstract

Ice-based thermal energy storage (TES) systems are effective for load shifting and demand response in public buildings under time-of-use (TOU) tariffs. The management and allocation of ice storage and release during the day are vital to the cost efficiency and energy performance of a TES system. Currently, fixed-schedule, rule-based, and model predictive control methods are widely used for cost-saving control of ice-based TES systems, but they may perform poorly in a detailed virtual environment that reflects cooling-load uncertainty and the complexity of system performance. This study proposes a reinforcement learning (RL) approach for optimal control of ice-based TES systems in commercial buildings. The RL framework is defined according to the TOU tariff, the predicted cooling load, and the simulated cooling-plant performance. A deep Q-network (DQN) RL agent is tuned and trained for control decision-making using multi-step temporal-difference learning and an ε-greedy algorithm. A detailed environment model with heat transfer simulation is developed to train and evaluate the RL agent, and is calibrated with measured data from a case study of an ice-based TES system. Compared to the fixed-schedule strategy, the proposed RL controller achieves a 7.6% cost reduction over the simulated 2020 cooling season. The RL-based control approach can effectively learn the system's features and improve the cost efficiency of an ice-based TES system.
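The decision loop named in the abstract (ε-greedy exploration with multi-step temporal-difference targets) can be sketched in miniature. Everything below is an illustrative assumption rather than the paper's calibrated model: the two-tier tariff, the toy ice-plant dynamics, and a tabular Q-table standing in for the deep Q-network.

```python
import random

ACTIONS = ["charge", "discharge", "idle"]
# Assumed two-tier TOU tariff ($/kWh): cheap off-peak at night, pricier daytime.
TOU_PRICE = {h: 0.05 if h < 8 or h >= 22 else 0.15 for h in range(24)}
GAMMA, EPSILON, ALPHA, N_STEP = 0.99, 0.1, 0.05, 3

Q = {}  # tabular stand-in for the DQN: state (hour, ice_level) -> action values

def q_values(state):
    return Q.setdefault(state, {a: 0.0 for a in ACTIONS})

def select_action(state):
    """Epsilon-greedy: explore with probability EPSILON, else act greedily."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    qv = q_values(state)
    return max(qv, key=qv.get)

def step(hour, ice, action):
    """Hypothetical plant: charging makes ice (extra power draw), discharging
    offsets the chiller; reward is the negative electricity cost for the hour."""
    if action == "charge" and ice < 10:
        return -TOU_PRICE[hour] * 2.0, ice + 1
    if action == "discharge" and ice > 0:
        return -TOU_PRICE[hour] * 0.2, ice - 1
    return -TOU_PRICE[hour] * 1.0, ice

def n_step_update(transitions, bootstrap_state):
    """Multi-step TD target: discounted rewards over the window plus a
    bootstrapped max-Q estimate at the end of the window."""
    g = sum(GAMMA ** k * r for k, (_, _, r) in enumerate(transitions))
    g += GAMMA ** len(transitions) * max(q_values(bootstrap_state).values())
    s0, a0, _ = transitions[0]
    q_values(s0)[a0] += ALPHA * (g - q_values(s0)[a0])

random.seed(0)
for episode in range(200):           # one episode = one simulated day
    ice, window = 5, []
    for hour in range(24):
        state = (hour, ice)
        action = select_action(state)
        reward, ice = step(hour, ice, action)
        window.append((state, action, reward))
        if len(window) == N_STEP:    # slide the n-step window forward
            n_step_update(window, ((hour + 1) % 24, ice))
            window.pop(0)
```

In the paper's setting, the tabular `Q` would be replaced by a neural network trained on the calibrated heat-transfer environment, but the ε-greedy selection and n-step TD target take the same form.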
