Abstract

With recent advances in deep reinforcement learning, it is time to take another look at reinforcement learning as an approach to discrete production control. We applied proximal policy optimization (PPO), a recently developed deep reinforcement learning algorithm, to the stochastic economic lot scheduling problem. The problem involves scheduling manufacturing decisions on a single machine under stochastic demand and, despite its simple formulation, remains computationally challenging. We implemented two parameterized models for the control policy and value approximation, a linear model and a neural network, and used a modified PPO algorithm to seek the optimal parameter values. Benchmarking against the best-known control policy for the test case, in which Paternina-Arboleda and Das (2005) combined a base-stock policy with an older reinforcement learning algorithm, we improved the average cost rate by 2%. Our approach is also more general: it does not require a priori policy parameters such as base-stock levels, and the entire policy is learned.
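For context, the heart of PPO is its clipped surrogate objective, which limits how far a single policy update can move from the data-collecting policy. The sketch below is a minimal NumPy illustration of that objective, not the authors' implementation; the batch of production decisions, the advantage estimates, and the clipping range eps=0.2 are illustrative assumptions.

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
    """Clipped surrogate objective of PPO (Schulman et al., 2017).

    Maximizing this keeps the updated policy close to the old one:
    the probability ratio is clipped to [1 - eps, 1 + eps], so a
    single gradient step cannot exploit the advantage estimates by
    moving the policy too far.
    """
    ratio = np.exp(logp_new - logp_old)            # pi_new(a|s) / pi_old(a|s)
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    return np.mean(np.minimum(ratio * advantages, clipped * advantages))

# Toy batch: log-probabilities of sampled actions (e.g. which product
# to set up next on the machine) under the old and candidate policies,
# plus their estimated advantages. All values here are synthetic.
rng = np.random.default_rng(0)
logp_old = np.log(rng.uniform(0.1, 0.9, size=32))
logp_new = logp_old + rng.normal(0.0, 0.1, size=32)
advantages = rng.normal(0.0, 1.0, size=32)

print(ppo_clip_objective(logp_new, logp_old, advantages))
```

In the setting of the paper, `logp_new` would come from either the linear model or the neural network policy, and the objective would be maximized with stochastic gradient ascent over trajectories simulated from the lot scheduling environment.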
