Abstract

Demand-Driven Material Requirements Planning (DDMRP) is an emerging inventory management approach that has attracted significant attention from both academia and industry. Numerous recent studies have highlighted the advantages of DDMRP over traditional methods such as material requirements planning (MRP), the Theory of Constraints (TOC), and Kanban. However, the performance of DDMRP depends heavily on how several control parameters are set. Parameterization models and the optimization of control variables have contributed significantly to inventory management, offering a structured and practical way to handle complex variables and constraints. This paper introduces a parameterization model that leverages deep reinforcement learning (DRL) to configure a DDMRP system under uncertain demand. The main objective is to dynamically determine optimal values for the variability and lead-time factors within the DDMRP framework so as to maximize customer service levels while keeping inventory efficient. The results of this study emphasize the effectiveness of DRL as an automated decision-making approach for controlling DDMRP parameters. The findings also show that adjusting the variability and lead-time factors can improve DDMRP performance, particularly in terms of on-time delivery (OTD) and average on-hand inventory (AOHI).
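To make the two tuned parameters concrete, the following is a minimal sketch of the standard DDMRP buffer-zone equations that the lead-time factor and variability factor parameterize. This is not the paper's implementation; the numeric inputs and the simplification of the green zone (ignoring MOQ and order-cycle terms) are illustrative assumptions.

```python
def ddmrp_buffer_zones(adu: float, dlt: float, ltf: float, vf: float):
    """Return (red, yellow, green) zone sizes for one buffered part.

    adu: average daily usage (units/day)
    dlt: decoupled lead time (days)
    ltf: lead-time factor, typically in [0.2, 0.8]
    vf:  variability factor, typically in [0.2, 0.8]
    """
    red_base = adu * dlt * ltf    # lead-time-driven portion of the red zone
    red_safety = red_base * vf    # variability-driven safety portion
    red = red_base + red_safety
    yellow = adu * dlt            # coverage of demand over the lead time
    green = adu * dlt * ltf       # order-frequency zone (MOQ/cycle ignored here)
    return red, yellow, green


# A DRL agent in the paper's setting would select (ltf, vf) dynamically;
# here two candidate settings show how they move the buffer ceiling.
for ltf, vf in [(0.5, 0.5), (0.3, 0.7)]:
    r, y, g = ddmrp_buffer_zones(adu=100, dlt=10, ltf=ltf, vf=vf)
    print(f"ltf={ltf}, vf={vf}: red={r:.0f}, yellow={y:.0f}, "
          f"green={g:.0f}, top_of_green={r + y + g:.0f}")
```

Larger factor values enlarge the red and green zones, raising average on-hand inventory but protecting service level; the agent's task is to balance that trade-off as demand varies.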