Abstract

Rapid product design updates, unstable supply chains, and erratic demand patterns are challenging current modes of production. Reconfigurable manufacturing systems (RMS) aim to provide a cost-effective response to these challenges. However, given their complex, adjustable nature, RMSs cannot fully unlock their potential when operated with conventional fixed dispatching rules. Reinforcement learning (RL) algorithms offer a useful approach for finding optimal solutions in such complex systems. This paper presents a framework for training a scheduling agent based on the proximal policy optimisation (PPO) algorithm. The results of a numerical case study, which implemented the framework on a simplified RMS model, suggest a good level of robustness and reveal areas of unpredictable behaviour that could be the focus of further research.
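To make the training setup concrete, the sketch below shows one way a PPO scheduling agent could be trained on a toy reconfigurable-manufacturing environment. This is an illustrative example only, not the authors' framework or case study: the environment (ToyRMSEnv, its product types, reconfiguration penalty, and reward terms) is a hypothetical simplification, and it assumes a gymnasium-style interface with the stable-baselines3 PPO implementation.

```python
# Minimal sketch, assuming gymnasium and stable-baselines3 are available.
# The RMS model below (3 product types, a reconfiguration penalty, random
# arrivals) is a hypothetical stand-in, not the paper's simulation model.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class ToyRMSEnv(gym.Env):
    """Toy reconfigurable-manufacturing scheduling environment (hypothetical)."""

    N_PRODUCTS = 3        # product types competing for the machine
    RECONFIG_COST = 2.0   # penalty for switching the machine configuration
    HORIZON = 200         # scheduling decisions per episode

    def __init__(self):
        super().__init__()
        # Observation: queue length per product type + current configuration (one-hot)
        self.observation_space = spaces.Box(low=0.0, high=np.inf,
                                            shape=(2 * self.N_PRODUCTS,),
                                            dtype=np.float32)
        # Action: which product type to configure for and process next
        self.action_space = spaces.Discrete(self.N_PRODUCTS)

    def _obs(self):
        config_onehot = np.eye(self.N_PRODUCTS, dtype=np.float32)[self.config]
        return np.concatenate([self.queue.astype(np.float32), config_onehot])

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.queue = self.np_random.integers(0, 5, size=self.N_PRODUCTS)
        self.config = 0
        self.t = 0
        return self._obs(), {}

    def step(self, action):
        reward = 0.0
        if action != self.config:          # reconfiguring the system costs time/effort
            reward -= self.RECONFIG_COST
            self.config = int(action)
        if self.queue[self.config] > 0:    # process one job of the selected type
            self.queue[self.config] -= 1
            reward += 1.0
        reward -= 0.1 * float(self.queue.sum())  # holding cost as a proxy for tardiness
        # Erratic demand: random job arrivals at every decision step
        self.queue += self.np_random.integers(0, 2, size=self.N_PRODUCTS)
        self.t += 1
        return self._obs(), reward, self.t >= self.HORIZON, False, {}


if __name__ == "__main__":
    env = ToyRMSEnv()
    agent = PPO("MlpPolicy", env, verbose=0)   # clipped-surrogate PPO, default hyperparameters
    agent.learn(total_timesteps=20_000)        # short training run for illustration only
    obs, _ = env.reset(seed=0)
    action, _ = agent.predict(obs, deterministic=True)
    print("scheduling action for current state:", int(action))
```

In this kind of setup, the learned policy replaces a fixed dispatching rule: the agent observes queue levels and the current configuration and trades off reconfiguration cost against holding cost, which is the behaviour a fixed rule cannot adapt when demand becomes erratic.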
