Abstract
With increasing product complexity due to mass customisation in the automotive industry, the downsides of conventional production concepts such as flow production become more pronounced. Their inability to deal adequately with cycle time losses opens up possibilities for new concepts like matrix-structured production (MSP). Due to the inherent dynamics of MSP, control concepts like takt binding or control stands are no longer sufficient to achieve near-optimal performance. The application of Reinforcement Learning (RL) to this problem has emerged in recent years. In particular, routing and dispatching tasks have been solved by applying RL. As both tasks influence each other's performance, a combined RL approach is developed. To this end, a car body construction shop is simulated to test different Markov decision process models, algorithms, and rewards. The new approach is validated against common heuristics regarding logistic performance and metrics relevant to operating autonomous guided vehicle (AGV) fleets. For this, RL systems are designed and compared. The combined approach to production control, dispatching jobs and routing AGVs, achieved performance equivalent to the heuristics. Still, it excelled in fleet operation metrics, such as reduced livelocks and deadlocks.
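To make the combined setting concrete, the following is a minimal, illustrative sketch (not the paper's implementation) of a Gymnasium-style environment in which a single agent chooses both a dispatching decision (which job is released to which station) and a routing decision (which grid move the AGV takes next). The grid size, number of jobs, reward shaping, and all identifiers below are assumptions made purely for illustration.

```python
# Hypothetical sketch of a combined dispatching + routing MDP for a
# matrix-structured shop; sizes and rewards are illustrative assumptions.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class MatrixShopEnv(gym.Env):
    """Toy matrix-structured shop: a 3x3 grid of stations served by one AGV."""

    GRID = 3          # stations arranged in a GRID x GRID matrix
    N_JOBS = 5        # jobs waiting to be dispatched
    MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]  # up/down/left/right/wait

    def __init__(self):
        n_stations = self.GRID * self.GRID
        # Combined action: (job to dispatch, target station, AGV move)
        self.action_space = spaces.MultiDiscrete([self.N_JOBS, n_stations, len(self.MOVES)])
        # Observation: job-completion flags + AGV position (row, col)
        self.observation_space = spaces.Box(0.0, float(self.GRID), shape=(self.N_JOBS + 2,))

    def _obs(self):
        return np.concatenate([self.done_flags, np.array(self.agv, dtype=np.float32)])

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.done_flags = np.zeros(self.N_JOBS, dtype=np.float32)
        self.agv = [0, 0]            # AGV starts at the top-left station
        self.steps = 0
        return self._obs(), {}

    def step(self, action):
        job, station, move = int(action[0]), int(action[1]), int(action[2])
        self.steps += 1
        # Routing part: move the AGV on the grid (clipped to the matrix bounds).
        dr, dc = self.MOVES[move]
        self.agv[0] = int(np.clip(self.agv[0] + dr, 0, self.GRID - 1))
        self.agv[1] = int(np.clip(self.agv[1] + dc, 0, self.GRID - 1))
        # Dispatching part: a job completes when the AGV reaches the chosen
        # station and that job has not been processed yet.
        target = divmod(station, self.GRID)
        reward = -0.1  # per-step cost stands in for cycle time losses
        if self.done_flags[job] == 0 and tuple(self.agv) == target:
            self.done_flags[job] = 1.0
            reward += 1.0
        terminated = bool(self.done_flags.all())
        truncated = self.steps >= 200
        return self._obs(), reward, terminated, truncated, {}


if __name__ == "__main__":
    env = MatrixShopEnv()
    obs, _ = env.reset(seed=42)
    total = 0.0
    for _ in range(50):          # random policy, just to exercise the loop
        obs, r, term, trunc, _ = env.step(env.action_space.sample())
        total += r
        if term or trunc:
            break
    print(f"return of random policy: {total:.2f}")
```

A single MultiDiscrete action space is one simple way to couple the two decisions; alternative designs (e.g. separate dispatching and routing agents, or hierarchical policies) are equally plausible readings of a "combined" approach and are not prescribed by the abstract.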