Abstract

Stochastic dynamic programming (SDP) is an optimization technique that has been used in reservoir operation for many years. However, because it is an iterative method requiring considerable computational time, establishing an adequate convergence criterion is important for its most effective use. Building on two previous studies on the optimization of operations in one of the most important multi-reservoir systems in Mexico, this work applies SDP with a focus on the convergence criterion used in the optimization process. In the first trial, following the recommendations in the literature consulted, the absolute value of the difference between two consecutive iterations was compared against a set tolerance value and a discount factor. In the second trial, the squared difference between the two consecutive iterations was used instead. In each trial, the computational time needed to obtain the optimal operating policy was quantified, along with whether that policy was obtained by meeting the convergence criterion or by reaching the maximum number of iterations. With each optimal policy, the operation of the system under study was simulated, and four variables were taken as indicators of system behaviour. The results showed few differences between the two operating policies but notable differences in the computation time used in the optimization process, as well as in the fulfilment of the convergence criterion.
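As a rough illustration of the two stopping rules compared in the abstract, the sketch below runs value iteration for a hypothetical single-reservoir SDP and stops either when the absolute difference between two consecutive value functions (scaled by the discount factor, one common form of the rule cited in the literature) falls below a tolerance, or when the squared difference does. The benefit matrix, inflow transition probabilities, grid sizes and parameter values are all illustrative assumptions, not the data or model of the Mexican multi-reservoir system studied.

```python
import numpy as np

# Minimal sketch (not the authors' model): backward value iteration for a
# hypothetical single-reservoir SDP, comparing the two stopping rules.
# All names and values (n_storage, n_inflow, discount, tol, max_iter) are illustrative.

rng = np.random.default_rng(0)
n_storage, n_inflow = 20, 5          # discretized storage and inflow states
discount = 0.95                      # discount factor
tol, max_iter = 1e-4, 500

# Hypothetical immediate benefit of operating at storage s under inflow state q.
benefit = rng.random((n_storage, n_inflow))
# Hypothetical inflow transition probabilities (rows sum to 1).
p_inflow = rng.random((n_inflow, n_inflow))
p_inflow /= p_inflow.sum(axis=1, keepdims=True)

def iterate(value):
    """One Bellman update: expected future value, then the best next storage."""
    expected = value @ p_inflow.T          # E[V(s', q') | current inflow state q]
    best = expected.max(axis=0)            # optimal choice of next storage s'
    return benefit + discount * best       # shape (n_storage, n_inflow)

def solve(criterion):
    """Iterate until criterion(old, new) is met or the iteration cap is hit."""
    v = np.zeros((n_storage, n_inflow))
    for k in range(max_iter):
        v_new = iterate(v)
        if criterion(v, v_new):
            return v_new, k + 1, True      # converged
        v = v_new
    return v, max_iter, False              # stopped at maximum number of iterations

# Trial 1: absolute difference of two consecutive iterations, compared against
# a tolerance scaled by the discount factor (one common textbook form).
abs_rule = lambda old, new: np.max(np.abs(new - old)) < tol * (1 - discount) / (2 * discount)
# Trial 2: squared difference of two consecutive iterations against the tolerance.
sq_rule = lambda old, new: np.max((new - old) ** 2) < tol

for name, rule in [("absolute", abs_rule), ("squared", sq_rule)]:
    _, iters, converged = solve(rule)
    print(f"{name:>8} criterion: {iters} iterations, converged={converged}")
```

With these illustrative settings the squared-difference rule is satisfied after far fewer iterations than the discount-scaled absolute rule, which mirrors the kind of computation-time contrast the study reports.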
