Abstract

Demand response (DR) plays a significant role in manufacturing system energy management and sustainable industrial development. Current literature on DR management for manufacturing systems has mostly focused on day-ahead production scheduling, whose effectiveness is limited by the lack of flexibility to control the production line in real time. Reinforcement learning (RL) offers great potential for real-time production control to address this flexibility issue. However, since production is the top priority for any manufacturing system, a trustworthy and explainable RL method that can guarantee the production requirements is necessary for this application. This study proposes an explainable multi-agent deep RL method in which an analytical manufacturing system model is used to decompose the system-level energy management objective and production requirement to the agent level. Based on its decomposed task, each agent can then form an interpretable safe action subset that achieves the original system-level production requirement while learning to reduce energy costs under DR. The proposed RL method, referred to as the decomposed multi-agent deep Q-network (DMADQN), is applied to control a section of an automotive assembly line using one year of DR electricity price data to validate its performance. Results show that the proposed method ensures achievement of the production requirement while providing better DR energy management performance in both the RL training and testing phases. In addition, the proposed approach outperforms the day-ahead scheduling approach and saves up to an additional 30.7% of energy costs under dynamic DR.
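To make the safe-action idea concrete, the sketch below shows one way an agent's epsilon-greedy DQN policy could be restricted to a precomputed safe action subset (e.g., the actions that keep its decomposed production requirement feasible). This is a minimal illustration under those assumptions; the class, function, and parameter names are hypothetical and do not come from the paper.

```python
import numpy as np
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Small Q-value network for one agent (illustrative architecture only)."""

    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def select_safe_action(q_net: QNetwork,
                       state: np.ndarray,
                       safe_actions: list[int],
                       epsilon: float = 0.1) -> int:
    """Epsilon-greedy action selection restricted to the safe action subset."""
    if np.random.rand() < epsilon:
        # Explore only within the safe subset.
        return int(np.random.choice(safe_actions))
    with torch.no_grad():
        q_values = q_net(torch.as_tensor(state, dtype=torch.float32))
    # Mask out unsafe actions before taking the greedy argmax.
    mask = torch.full_like(q_values, float("-inf"))
    mask[safe_actions] = 0.0
    return int(torch.argmax(q_values + mask).item())
```

In this sketch, restricting both exploration and the greedy argmax to the safe subset keeps every executed action consistent with the decomposed production requirement, while the Q-network is still trained to minimize energy cost under DR prices.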
