Abstract

We define a regularized variant of the dual dynamic programming algorithm, called DDP-REG, to solve nonlinear dynamic programming equations, and we extend the algorithm to nonlinear stochastic dynamic programming equations. The corresponding algorithm, called SDDP-REG, can be seen as an extension of a recently introduced regularization of the stochastic dual dynamic programming (SDDP) algorithm, which was studied only for linear problems and with less general prox-centers. We show the convergence of DDP-REG and SDDP-REG and assess their performance on portfolio models with direct transaction costs and market impact costs. In particular, we propose a risk-neutral portfolio selection model that can be cast as a multistage stochastic second-order cone program; the formulation is motivated by the market impact costs incurred in large portfolio rebalancing operations. Numerical simulations show that DDP-REG is much faster than DDP on all problem instances considered (up to 184 times faster), and that SDDP-REG is faster than SDDP on the tested instances of portfolio selection problems with market impact costs and much faster (8.2 times) on the risk-neutral multistage stochastic linear program implemented.
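
For concreteness, the regularized forward pass of such methods typically replaces the standard SDDP stage subproblem by a prox-regularized one; a schematic form, written in our own notation rather than the paper's, is

$$x_t^k \in \operatorname*{arg\,min}_{x_t \in X_t(x_{t-1}^k,\,\xi_t)} \; f_t(x_t,\xi_t) \;+\; \mathcal{Q}_{t+1}^{k-1}(x_t) \;+\; \lambda_k\, d\!\left(x_t, \hat{x}_t^k\right),$$

where $f_t$ is the stage cost, $\mathcal{Q}_{t+1}^{k-1}$ is the current polyhedral lower approximation of the cost-to-go function, $\hat{x}_t^k$ is the prox-center, $d$ is a prox-function (e.g., a squared Euclidean distance), and $\lambda_k \ge 0$ is a regularization weight; taking $\lambda_k = 0$ recovers the unregularized forward pass.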
