Abstract

Base stock and (extended) Kanban policies have been analysed as basic and simple production and inventory control policies. Advance demand information (ADI), meanwhile, is useful for controlling production and inventory systems, and production control with ADI has been studied over the past decade. Moreover, if ordering and production decisions depend on the state of the system, higher profits can be expected. A state-dependent dynamic policy can be derived via a Markov decision process formulation; for large problems, however, deriving an optimal policy is difficult because of the curse of dimensionality. In this paper, the two-phase time aggregation algorithm of Arruda and Fragoso (2015) (the TA algorithm) is applied to a two-stage production and inventory system with advance demand information. Observations from numerical examples on small-dimension problems suggest a modification of the TA algorithm that converges to better near-optimal policies. Numerical results demonstrate the effectiveness of the modification, and the derived near-optimal policies are compared with base stock and extended Kanban policies.
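To illustrate the kind of Markov decision process formulation the abstract refers to, the following is a minimal sketch, not the paper's model: a hypothetical single-stage lost-sales inventory system (state = on-hand inventory, action = production quantity, two-point demand) solved by standard value iteration. All parameters (`MAX_INV`, `HOLD_COST`, `LOST_SALE_COST`, the demand distribution) are illustrative assumptions; the paper's two-stage system with ADI and the TA algorithm are far larger and more involved.

```python
import numpy as np

# Hypothetical toy single-stage inventory MDP (NOT the paper's model):
# state  s = on-hand inventory, 0..MAX_INV
# action a = units produced this period
# demand is 0 or 1 with equal probability; unmet demand is lost.
MAX_INV = 5
ACTIONS = range(0, 3)           # produce 0, 1, or 2 units per period
HOLD_COST = 1.0                 # holding cost per unit carried over
LOST_SALE_COST = 10.0           # penalty per unit of lost demand
DEMANDS = [(0, 0.5), (1, 0.5)]  # (demand, probability) pairs
GAMMA = 0.95                    # discount factor

def value_iteration(tol=1e-8, max_iter=10_000):
    """Standard value iteration; returns the value function and a greedy policy."""
    V = np.zeros(MAX_INV + 1)
    policy = np.zeros(MAX_INV + 1, dtype=int)
    for _ in range(max_iter):
        V_new = np.empty_like(V)
        for s in range(MAX_INV + 1):
            best = np.inf
            for a in ACTIONS:
                stocked = min(s + a, MAX_INV)  # inventory after production
                q = 0.0
                for d, p in DEMANDS:
                    next_s = max(stocked - d, 0)
                    lost = max(d - stocked, 0)
                    cost = HOLD_COST * next_s + LOST_SALE_COST * lost
                    q += p * (cost + GAMMA * V[next_s])
                if q < best:
                    best, policy[s] = q, a
            V_new[s] = best
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    return V_new, policy

V, policy = value_iteration()
print("state-dependent production policy:", policy)
```

Even in this toy setting, the optimal policy is state-dependent (produce more when inventory is low, nothing when it is full), which is the behaviour the abstract contrasts with fixed base stock or Kanban rules. The state space here has 6 states; with two stages and ADI pipelines the state becomes a vector and the state space grows multiplicatively, which is the curse of dimensionality that motivates time aggregation.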
