Abstract
Previous studies of the real-time scheduling (RTS) problem domain indicate that applying multiple dispatching rules (MDRs) to different zones of the system enhances production performance more than applying a single dispatching rule (SDR) to all machines in the shop floor control system over a given scheduling interval. This approach is feasible, but the drawback of the previously proposed MDRs method is its inability to respond to changes in the shop-floor environment. The RTS knowledge base (KB) is not static, so it would be useful to establish a procedure that maintains the KB incrementally when important changes occur in the manufacturing system. To address this issue, we propose a reinforcement learning (RL)-based RTS approach using the MDRs mechanism, which incorporates two main components: (1) an off-line learning module and (2) a Q-learning-based RL module. Across various performance criteria over a long period, the proposed approach outperforms the previously proposed MDRs method, the machine-learning-based RTS approach using an SDR, and heuristic individual dispatching rules.
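The abstract does not specify how the Q-learning module is formulated; the following is a minimal sketch of tabular Q-learning applied to dispatching-rule selection, under assumed (hypothetical) definitions of states, actions, and rewards. The rule names, state labels, and hyperparameter values are illustrative, not taken from the paper.

```python
import random

# Candidate dispatching rules serve as the agent's actions (illustrative set).
RULES = ["SPT", "EDD", "FIFO"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

q_table = {}  # (state, rule) -> estimated long-run value


def q(state, rule):
    """Look up a Q-value, defaulting to 0 for unseen state-rule pairs."""
    return q_table.get((state, rule), 0.0)


def choose_rule(state):
    """Epsilon-greedy selection of a dispatching rule for the current state."""
    if random.random() < EPSILON:
        return random.choice(RULES)
    return max(RULES, key=lambda r: q(state, r))


def update(state, rule, reward, next_state):
    """Standard Q-learning update after observing the reward (e.g., a
    performance measure such as negative mean tardiness) and the next
    shop-floor state."""
    best_next = max(q(next_state, r) for r in RULES)
    q_table[(state, rule)] = q(state, rule) + ALPHA * (
        reward + GAMMA * best_next - q(state, rule)
    )
```

In this framing, the off-line learning module would supply the initial KB, while the on-line updates above let the KB adapt incrementally as shop-floor conditions change.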