Abstract
The rapid integration of Renewable Energy Sources (RES) into energy systems is critical for achieving a fossil fuel-free future. However, it also introduces significant challenges, such as increased system complexity and uncertainty in energy output. To address these challenges, this paper presents a model-free Deep Reinforcement Learning (DRL) method for energy hub scheduling using state-of-the-art algorithms, specifically Soft Actor-Critic (SAC) and Proximal Policy Optimization (PPO). Breaking away from traditional energy management systems that typically use discrete action spaces, this approach employs a multi-dimensional, continuous action space, providing a more accurate framework for decision-making. Furthermore, the study adopts a comprehensive perspective by considering not only energy supply and operational costs but also the often-overlooked impact of emissions in the objective function. The results yield a Pareto space spanning these criteria, offering a diverse scheduling framework that balances economic feasibility with environmental sustainability. By presenting a range of optimal solutions, this study contributes to the development of more resilient and sustainable energy systems.
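To make the setup concrete, the following is a minimal sketch, not the authors' implementation, of how such a scheduling problem might be framed: a toy energy-hub environment with a multi-dimensional continuous action space and a reward that weights operating cost against emissions, trained with SAC from stable-baselines3. The environment name, demand profile, price and emission coefficients, and the trade-off weight are illustrative assumptions; sweeping the weight is one simple way to trace out points of a cost-emissions Pareto front.

```python
# Illustrative sketch only: a toy energy hub with a continuous action space and a
# weighted cost-plus-emissions reward. All numeric coefficients are assumed values.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import SAC


class EnergyHubEnv(gym.Env):
    """Toy hub: choose gas-boiler and grid-import setpoints to meet demand."""

    def __init__(self, w_emissions=0.5, horizon=24):
        super().__init__()
        self.w = w_emissions      # trade-off weight used for the Pareto sweep
        self.horizon = horizon
        # Multi-dimensional continuous action: [gas_setpoint, grid_import] in kW
        self.action_space = spaces.Box(low=0.0, high=100.0, shape=(2,), dtype=np.float32)
        # Observation: [hour index, current demand in kW]
        self.observation_space = spaces.Box(low=0.0, high=np.inf, shape=(2,), dtype=np.float32)

    def _demand(self, t):
        # Assumed sinusoidal daily demand profile (kW)
        return 60.0 + 20.0 * np.sin(2 * np.pi * t / 24)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        return np.array([self.t, self._demand(self.t)], dtype=np.float32), {}

    def step(self, action):
        gas, grid = np.clip(action, 0.0, 100.0)
        demand = self._demand(self.t)
        unmet = max(demand - gas - grid, 0.0)
        cost = 0.04 * gas + 0.10 * grid        # assumed $/kWh prices
        emissions = 0.20 * gas + 0.45 * grid   # assumed kgCO2/kWh factors
        # Weighted objective: shortfalls in supplied energy are penalised heavily
        reward = -((1.0 - self.w) * cost + self.w * emissions + 10.0 * unmet)
        self.t += 1
        obs = np.array([self.t, self._demand(self.t)], dtype=np.float32)
        return obs, reward, self.t >= self.horizon, False, {}


# Sweeping the emissions weight yields different points of the Pareto front
for w in (0.0, 0.5, 1.0):
    model = SAC("MlpPolicy", EnergyHubEnv(w_emissions=w), verbose=0)
    model.learn(total_timesteps=5_000)
```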