Abstract

The problem of globally optimizing a real-valued function is inherently intractable, in that no practically useful characterization of the global optimum is available [1]. Nevertheless, the practical need to find a relatively low local minimum has motivated considerable research over the last decade into algorithms that attempt to find such a minimum (see the survey on global optimization by Törn and Žilinskas [2]). Two distinct approaches to global optimization have been identified, namely deterministic and stochastic. Methods in the first class implicitly search the entire domain of the function and are thus guaranteed to find the global optimum. However, these algorithms are restricted to severely limited classes of functions and are often computationally infeasible, because the number of computations required grows exponentially with the dimension of the feasible space. To overcome the inherent difficulties of such guaranteed-accuracy algorithms, much research effort has been devoted to algorithms that introduce a stochastic element, relaxing the deterministic guarantee into a confidence measure. A general stochastic algorithm for global unconstrained optimization consists of three major steps: a sampling step, an unconstrained local optimization step, and a check of some stopping criterion. This paper is concerned with modifying such an algorithm so as to extend its application to non-convex constrained global optimization.
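The three-step scheme described above (sampling, local optimization, stopping check) can be illustrated by a minimal multistart sketch. The function names, the derivative-free local descent, and the fixed-sample-budget stopping rule below are illustrative assumptions, not the algorithm studied in the paper:

```python
import random

def local_descent(f, x, step=0.1, tol=1e-8, max_iter=10000):
    # Crude derivative-free local minimization: gradient estimated by
    # central differences, step size halved whenever no progress is made.
    h = 1e-6
    for _ in range(max_iter):
        g = (f(x + h) - f(x - h)) / (2 * h)
        if abs(g) < tol or step < tol:
            break
        x_new = x - step * g
        if f(x_new) < f(x):
            x = x_new            # accept the descent step
        else:
            step *= 0.5          # backtrack: shrink the step size
    return x

def multistart_minimize(f, lo, hi, n_samples=30, seed=0):
    # Sketch of a general stochastic global optimization loop
    # (hypothetical names; fixed sample budget as the stopping criterion).
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_samples):
        x0 = rng.uniform(lo, hi)         # sampling step
        x = local_descent(f, x0)         # unconstrained optimization step
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    # stopping criterion here: the sample budget is exhausted
    return best_x, best_f
```

On a one-dimensional function with two local minima, such as f(x) = (x² − 4)² + x on [−5, 5], the multistart loop is likely to land in the basin of the lower minimum near x ≈ −2, whereas a single local search started at a positive point would not.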
