Abstract

One approach to process design with uncertain parameters is to formulate a stochastic MINLP. When there are many uncertain parameters, the number of samples required becomes unmanageably large, and computing the solution of the MINLP can be difficult and very time-consuming. In this paper, two new algorithms, the optimality gap method (OGM) and the confidence level method (CLM), are presented for solving convex stochastic MINLPs. At each iteration, the sample average approximation (SAA) method is applied to the NLP sub-problem and the MILP master problem. A smaller-sample problem is solved multiple times with different batches of i.i.d. samples to make decisions, and a larger-sample problem (with the continuous/discrete decision variables fixed) is solved to re-evaluate the objective values. In the first algorithm, the sample sizes are increased iteratively until the gap between the statistical upper and lower bounds is within a pre-specified tolerance. Instead of requiring a small optimality gap, the second algorithm uses tight bounds when comparing the objective values of NLP sub-problems and weak bounds when cutting off solutions in the MILP master problems; the confidence of finding the optimal discrete solution can therefore be adjusted through the parameter used to tighten and weaken the bounds. The case studies show that the algorithms can significantly reduce the computational time required to find a solution with a given degree of confidence.
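As a rough illustration of the sampling structure described above (not the paper's actual OGM or CLM implementation, which operates inside an outer-approximation loop over NLP sub-problems and MILP master problems), the sketch below shows a generic SAA optimality-gap loop: small-sample problems are solved over several independent batches to produce candidate decisions and a statistical lower bound, the best candidate is re-evaluated on a larger independent sample with its decisions fixed to produce a statistical upper bound, and the sample sizes grow until the gap is within tolerance. The callables `solve_saa` and `scenario_cost`, and all default parameter values, are hypothetical placeholders and do not come from the paper.

```python
import numpy as np
from scipy import stats

def optimality_gap_loop(solve_saa, scenario_cost, rng,
                        n_small=50, n_large=2000, n_batches=20,
                        tol=1e-2, growth=2, max_iter=10, alpha=0.05):
    """Generic SAA optimality-gap loop (illustrative sketch only).

    solve_saa(samples) -> (decision, objective): solves the sample average
        approximation of the stochastic MINLP for one batch of samples.
    scenario_cost(decision, sample) -> float: objective value of a fixed
        decision under one sample (used for re-evaluation).
    Both callables are hypothetical placeholders supplied by the user.
    """
    for _ in range(max_iter):
        # Statistical lower bound: average of SAA optima over independent
        # small-sample batches, shifted down by a t-based confidence margin.
        objs, candidates = [], []
        for _ in range(n_batches):
            decision, obj = solve_saa(rng.standard_normal(n_small))
            objs.append(obj)
            candidates.append(decision)
        t_val = stats.t.ppf(1.0 - alpha, n_batches - 1)
        lower = np.mean(objs) - t_val * np.std(objs, ddof=1) / np.sqrt(n_batches)

        # Statistical upper bound: re-evaluate each candidate on a larger
        # independent sample with its decisions fixed, keep the best one.
        eval_samples = rng.standard_normal(n_large)
        costs = [np.array([scenario_cost(d, s) for s in eval_samples])
                 for d in candidates]
        best = int(np.argmin([c.mean() for c in costs]))
        z_val = stats.norm.ppf(1.0 - alpha)
        upper = costs[best].mean() + z_val * costs[best].std(ddof=1) / np.sqrt(n_large)

        if upper - lower <= tol:   # optimality gap small enough: stop
            return candidates[best], lower, upper

        n_small *= growth          # otherwise grow both sample sizes
        n_large *= growth

    return candidates[best], lower, upper
```

Here `rng` would be a `numpy.random.Generator` (e.g., `np.random.default_rng(0)`), and the Gaussian sampling is only a stand-in for whatever distribution the uncertain parameters actually follow.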
