Abstract

The planning process of a factory is highly important for the future performance of the resulting manufacturing system. However, the planning process is characterized by high complexity, given the large variety of possible layout variants. Recent research has demonstrated that reinforcement learning can be used successfully in the early stage of factory layout planning to generate and optimize a factory layout. However, dynamic aspects of the material flow that influence the manufacturing system's throughput time are usually neglected, since only analytical optimization objectives are considered. This paper addresses this research gap by presenting a reinforcement learning-based planning approach that incorporates a discrete event material flow simulation as part of the environment. The primary focus of the investigation is the formulation of an appropriate reward function, since the reward function directly influences the solution quality. The results of the investigation show that the reinforcement learning approach is able to successfully interact with the simulation and optimize the layout with respect to the total throughput time, provided the incentives given by the reward function are aligned with the optimization objective. The results are additionally compared to a manual planning approach and to the computed optimal solution, which demonstrates that the developed approach is a suitable support tool for factory layout planning. Finally, the approach is applied to an application scenario with 25 functional units to investigate the scalability of the approach.
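The interaction pattern the abstract describes (an agent placing functional units, a material flow model evaluating the finished layout, and a reward tied to total throughput time) can be sketched with a toy stand-in for the discrete event simulation. Everything below is an illustrative assumption, not the paper's implementation: the environment name, the one-dimensional slot layout, and the distance-based surrogate for throughput time are all hypothetical.

```python
import random

class ToyLayoutEnv:
    """Hypothetical toy environment: place n_units functional units
    into a row of slots; the terminal reward is the negative of a
    simplified total throughput time (a stand-in for a discrete event
    material flow simulation)."""

    def __init__(self, n_units=4, n_slots=6):
        self.n_units = n_units
        self.slots = [None] * n_slots
        self.next_unit = 0

    def reset(self):
        self.slots = [None] * len(self.slots)
        self.next_unit = 0
        return tuple(self.slots)

    def step(self, slot):
        # Action: choose a slot for the next unit in the process sequence.
        if self.slots[slot] is not None:
            return tuple(self.slots), -100.0, True  # invalid placement penalty
        self.slots[slot] = self.next_unit
        self.next_unit += 1
        done = self.next_unit == self.n_units
        # Reward aligned with the optimization objective: the shorter the
        # total throughput time of the finished layout, the higher the reward.
        reward = -self.throughput_time() if done else 0.0
        return tuple(self.slots), reward, done

    def throughput_time(self):
        # Surrogate for the simulation: transport time is the slot distance
        # between consecutive units of the fixed process sequence.
        pos = {u: i for i, u in enumerate(self.slots) if u is not None}
        return sum(abs(pos[u + 1] - pos[u]) for u in range(self.n_units - 1))

def random_search(env, episodes=300, seed=0):
    """Placeholder for an RL agent: sample layouts and keep the best reward."""
    rng = random.Random(seed)
    best = None
    for _ in range(episodes):
        env.reset()
        done = False
        while not done:
            free = [i for i, s in enumerate(env.slots) if s is None]
            _, reward, done = env.step(rng.choice(free))
        if best is None or reward > best:
            best = reward
    return best

print(random_search(ToyLayoutEnv()))  # best reward = negative throughput time
```

A real agent (e.g. a policy gradient or Q-learning method) would replace `random_search`, and the `throughput_time` surrogate would be replaced by a call into the discrete event simulation; the essential design point from the paper remains that the reward signal must encode the throughput-time objective.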
