Abstract
The problem of globally optimizing a real-valued function is inherently intractable in that no practically useful characterization of the global optimum is available [1]. Nevertheless, the practical need to find a relatively low local minimum has resulted in considerable research over the last decade to develop algorithms that attempt to find such a low minimum (see the survey on global optimization by Törn and Žilinskas [2]). Two distinct approaches to global optimization have been identified, namely deterministic and stochastic. Methods in the first class implicitly search the entire function domain and are thus guaranteed to find the global optimum. These algorithms are forced to deal with severely restricted classes of functions, however, and are often computationally infeasible because the number of computations required grows exponentially with the dimension of the feasible space. To overcome the inherent difficulties of the guaranteed-accuracy algorithms, much research effort has been devoted to algorithms in which a stochastic element is introduced; in this way the deterministic guarantee is relaxed into a confidence measure. A general stochastic algorithm for unconstrained global optimization consists of three major steps: a sampling step, an unconstrained local optimization step, and a check of some stopping criterion. This paper is concerned with modifying such an algorithm so as to extend its application to non-convex constrained global optimization.
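To make the three-step structure concrete, the following is a minimal multistart-style sketch in Python. It is an illustrative assumption, not the algorithm proposed in this paper: the function name `multistart`, the fixed sample budget used as the stopping criterion, and the choice of `scipy.optimize.minimize` as the local optimizer are all placeholders for the generic steps named in the abstract.

```python
import numpy as np
from scipy.optimize import minimize

def multistart(f, bounds, n_samples=50, seed=0):
    """Generic stochastic global minimization sketch (multistart flavor).

    1. Sampling step: draw candidate points uniformly over the box.
    2. Local optimization step: run a local minimizer from each sample.
    3. Stopping criterion: here, simply a fixed sampling budget.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T          # bounds given as [(lo, hi), ...]
    best_x, best_f = None, np.inf
    for _ in range(n_samples):
        x0 = rng.uniform(lo, hi)               # sampling step
        res = minimize(f, x0, bounds=bounds)   # local optimization step
        if res.fun < best_f:                   # keep the lowest minimum found
            best_x, best_f = res.x, res.fun
    return best_x, best_f

# Usage: the six-hump camel test function on [-3, 3] x [-2, 2]
camel = lambda x: ((4 - 2.1 * x[0]**2 + x[0]**4 / 3) * x[0]**2
                   + x[0] * x[1] + (-4 + 4 * x[1]**2) * x[1]**2)
x_star, f_star = multistart(camel, [(-3, 3), (-2, 2)])
```

Note that this sketch handles only the unconstrained (box-bounded) case; the paper's modification for non-convex constrained problems is not reproduced here.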