Abstract

This study investigates the application of deep reinforcement learning (DRL) frameworks for optimizing setpoints in district heating systems, which experience hourly fluctuations in air temperature, customer demand, and fuel prices. Optimizing setpoints, by adjusting the supply temperature and the use of thermal energy storage, offers significant potential for energy conservation and cost reduction. However, the inherent nonlinear complexities of the system render conventional manual methods ineffective. To address these challenges, we introduce a novel learning framework for DRL techniques that incorporates an expert knowledge module and leverages system status information to facilitate learning. Training is performed with model-free DRL methods on a refined digital twin of the Espoo district heating system. The expert module accounts for power plant capacities, ensuring that the resulting directives remain operationally feasible. Comprehensive simulations demonstrate the efficacy of the proposed approach, and comparative analyses against manual methods and evolutionary techniques show its superior ability to curtail fuel costs. This study advances the understanding of DRL in district heating optimization and offers a promising avenue for improved energy efficiency and cost savings.
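
The abstract does not detail the expert knowledge module, but conceptually it constrains the agent's proposed setpoints to what the plants and storage can actually deliver. The following Python sketch is purely illustrative: the `PlantLimits` values, the `apply_expert_module` function, and the action layout are assumptions for exposition, not the paper's implementation.

```python
from dataclasses import dataclass


@dataclass
class PlantLimits:
    """Hypothetical capacity limits; the paper's actual plant parameters are not given here."""
    supply_temp_min_c: float = 75.0     # minimum feasible supply temperature (degC)
    supply_temp_max_c: float = 115.0    # maximum feasible supply temperature (degC)
    tes_charge_max_mw: float = 40.0     # max thermal storage charge rate (MW)
    tes_discharge_max_mw: float = 40.0  # max thermal storage discharge rate (MW)
    plant_capacity_mw: float = 200.0    # total heat production capacity (MW)


def apply_expert_module(raw_action, demand_mw, limits: PlantLimits):
    """Project a raw DRL action onto the operationally feasible region.

    raw_action: (supply_temp_c, tes_power_mw) proposed by the policy,
                where positive tes_power_mw means charging the storage.
    demand_mw:  forecast customer heat demand for the next hour.
    Returns a feasible (supply_temp_c, tes_power_mw) pair.
    """
    supply_temp, tes_power = raw_action

    # Clip the supply-temperature setpoint to the feasible band.
    supply_temp = min(max(supply_temp, limits.supply_temp_min_c),
                      limits.supply_temp_max_c)

    # Clip the storage setpoint to its charge/discharge power limits.
    tes_power = min(max(tes_power, -limits.tes_discharge_max_mw),
                    limits.tes_charge_max_mw)

    # Charging is only possible if the plants can cover demand plus charging.
    if demand_mw + max(tes_power, 0.0) > limits.plant_capacity_mw:
        tes_power = max(limits.plant_capacity_mw - demand_mw, 0.0)

    return supply_temp, tes_power


if __name__ == "__main__":
    limits = PlantLimits()
    # The policy proposes an out-of-range temperature and an aggressive charge rate.
    feasible = apply_expert_module((130.0, 60.0), demand_mw=180.0, limits=limits)
    print(feasible)  # -> (115.0, 20.0)
```

In this reading, the expert module acts as a feasibility filter between the policy output and the digital twin, so the agent only ever executes directives the plants can realize.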
