Abstract

Multi-stage decision problems under uncertainty are abundant in the process industries. The Markov decision process (MDP) offers a general mathematical formulation of such problems. Although stochastic programming and dynamic programming are the standard methods for solving MDPs, their heavy computational requirements limit their usefulness in real applications. Approximate dynamic programming (ADP) combines simulation and function approximation to alleviate the ‘curse of dimensionality’ associated with the traditional dynamic programming approach. In this paper, we present ADP as a viable way to solve MDPs arising in process control and scheduling. We highlight key issues for its successful application to these types of problems, including the choice of function approximator and the use of a penalty function to guard against over-extension of the value function approximation during value iteration. Application studies on a number of well-known control and scheduling problems, including dual control, multiple-controller scheduling, and resource-constrained project scheduling, demonstrate the promise of ADP.
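To make the ADP idea concrete, the following is a minimal sketch of fitted value iteration on a small randomly generated MDP, using a linear function approximator over state features. All names, the feature map, and the MDP setup are illustrative assumptions, not the paper's actual formulation, and the penalty-function safeguard discussed in the paper is not reproduced here.

```python
import numpy as np

# Hypothetical example: fitted (approximate) value iteration on a tiny MDP.
rng = np.random.default_rng(0)

n_states, n_actions = 6, 2
gamma = 0.9  # discount factor

# Random MDP: P[a, s, s'] = transition probability, R[s, a] = reward.
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))

# Assumed feature map: low-order polynomial features of the state index.
phi = np.vstack([[1.0, s / n_states, (s / n_states) ** 2]
                 for s in range(n_states)])   # shape (n_states, 3)

w = np.zeros(phi.shape[1])  # weights of the linear value approximator

for _ in range(200):
    V = phi @ w                                   # current value estimates
    # Bellman backup: Q[s, a] = R[s, a] + gamma * sum_s' P[a, s, s'] V[s']
    Q = R + gamma * np.einsum('ask,k->sa', P, V)
    targets = Q.max(axis=1)                       # greedy one-step targets
    # Project targets back onto the feature space by least squares.
    w, *_ = np.linalg.lstsq(phi, targets, rcond=None)

V = phi @ w  # final approximate value function
```

In a realistic setting the backup would be estimated from simulated transitions rather than from an explicit transition matrix, and the projection step is where the choice of approximator (and any penalty against over-extension) matters.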
