Abstract

We consider the stochastic dual dynamic programming (SDDP) algorithm, a widely used method for multistage stochastic programming, and propose a variant that uses experience replay, a batch learning technique from reinforcement learning. To connect SDDP with reinforcement learning, we cast SDDP as a Q-learning algorithm and describe its application in both risk-neutral and risk-averse settings. We demonstrate the superiority of the algorithm over conventional SDDP by benchmarking it against PSR's SDDP software on a large-scale instance of the long-term planning problem for interconnected hydropower plants in Colombia. We find that SDDP with batch learning produces tighter optimality gaps in less time than conventional SDDP. We also find that batch learning improves the parallel efficiency of SDDP backward passes.
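The abstract describes the idea only at a high level. Below is a minimal Python sketch of how experience replay might be grafted onto an SDDP loop, assuming hypothetical `forward_pass`, `compute_cut`, and `add_cut` callbacks (none of these names come from the paper). It illustrates the batch-learning idea, not the authors' implementation.

```python
import random
from collections import deque


class ReplayBuffer:
    """Fixed-capacity store of (stage, state) pairs visited on forward passes."""

    def __init__(self, capacity: int):
        self._data = deque(maxlen=capacity)

    def add(self, stage: int, state) -> None:
        self._data.append((stage, state))

    def sample(self, batch_size: int):
        return random.sample(self._data, min(batch_size, len(self._data)))


def sddp_with_replay(forward_pass, compute_cut, add_cut,
                     n_iterations: int, batch_size: int,
                     capacity: int = 10_000) -> None:
    """Hypothetical SDDP loop with experience replay (illustration only).

    forward_pass() yields (stage, state) pairs along one sampled trajectory;
    compute_cut(stage, state) solves the stage subproblem and returns a cut;
    add_cut(stage, cut) appends the cut to the stage's value-function model.
    """
    buffer = ReplayBuffer(capacity)
    for _ in range(n_iterations):
        # Forward pass: simulate one trajectory and record every state visited.
        for stage, state in forward_pass():
            buffer.add(stage, state)
        # Backward pass: generate cuts for a replayed batch of stored states,
        # later stages first, so earlier cuts see the updated future value.
        batch = sorted(buffer.sample(batch_size),
                       key=lambda pair: pair[0], reverse=True)
        for stage, state in batch:
            add_cut(stage, compute_cut(stage, state))
```

In this sketch the cut computations for a sampled batch are mutually independent given the current value-function model, so they could be dispatched to parallel workers, which is consistent with the abstract's observation that batch learning improves the parallel efficiency of backward passes.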
