Abstract

Existing reinforcement learning (RL) methods have limited applicability to real-world industrial control problems because such problems impose strict operational constraints. To overcome this challenge, in this article we devise a novel RL method that optimizes a policy while strictly satisfying the system constraints. By leveraging a value-based RL approach, our method avoids the difficulties of directly searching for a constrained policy. The method has two main features. First, we devise two distance-based Q-value update schemes, incentive and penalty updates, which enable the agent to select controls in the feasible region by replacing an infeasible control with the nearest feasible continuous control. The proposed update schemes adjust the values of both the substituted feasible control and the original infeasible control. Second, we define the penalty cost as a shadow-price-weighted penalty to achieve efficient constrained policy learning. We apply our method to microgrid control, and the case study demonstrates its superiority.
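
The abstract only describes the two update schemes in words. As a rough illustration, and not the authors' algorithm, the following Python sketch shows one way a distance-based incentive/penalty Q-update could look for a tabular agent under simple box constraints; the names `project_to_feasible` and `constrained_q_update`, the box bounds, and the distance-proportional penalty form are all assumptions made for this sketch.

```python
import numpy as np
from collections import defaultdict

# Tabular Q-function over (state, control) pairs (illustrative only).
Q = defaultdict(lambda: defaultdict(float))

def project_to_feasible(u, u_min, u_max):
    """Assumed feasibility mapping: clip the control to box constraints,
    i.e. return the nearest feasible continuous control."""
    return float(np.clip(u, u_min, u_max))

def constrained_q_update(s, u, r, s_next, u_min=-1.0, u_max=1.0,
                         alpha=0.1, gamma=0.99, shadow_price=1.0):
    """One incentive/penalty update step (hypothetical formulation)."""
    u_f = project_to_feasible(u, u_min, u_max)   # nearest feasible control
    dist = abs(u - u_f)                          # distance to feasible region

    # Greedy bootstrap value of the next state (0 if unvisited).
    v_next = max(Q[s_next].values(), default=0.0)
    td_target = r + gamma * v_next

    # Incentive update: a standard TD update applied to the feasible control.
    Q[s][u_f] += alpha * (td_target - Q[s][u_f])

    # Penalty update: the original infeasible control is also updated, but its
    # target is reduced by a shadow-price-weighted, distance-based penalty
    # cost, so it becomes less attractive than the substituted feasible control.
    if dist > 0.0:
        Q[s][u] += alpha * (td_target - shadow_price * dist - Q[s][u])

# Example: the agent proposes u = 1.4, which violates the box constraint [-1, 1].
constrained_q_update(s=0, u=1.4, r=-0.2, s_next=1)
```

In the actual method, the feasible region and the shadow prices would presumably come from the underlying constrained optimization (e.g., the microgrid dispatch problem) rather than fixed box bounds as assumed here.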
