Abstract
The efficiency of metaheuristics depends on their parameters. Often this dependence is defined by statistical simulation and has many local minima; therefore, methods of stochastic global optimization are needed to optimize the parameters. Traditional numerical analysis considers optimization algorithms that guarantee some accuracy for all functions to be optimized, including the exact algorithms. Limiting the maximal error requires a computational effort that often increases exponentially with the size of the problem [Horst and Pardalos (1995), Handbook of Global Optimization, Kluwer Academic Publishers, Dordrecht/Boston/London], which limits practical applications. An alternative is the average-case analysis, where the expected error is made as small as possible [Calvin and Zilinskas (2000), Journal of Optimization Theory and Applications, 106, 297-307]. The average is taken over a set of functions to be optimized. The average-case analysis is called the Bayesian Approach (BA) [Diaconis (1988), Statistical Decision Theory and Related Topics, Springer-Verlag, Berlin, pp. 163-175; Mockus and Mockus (1987), Theory of Optimal Decision, Vol. 12, Institute of Mathematics and Cybernetics, Akademia Nauk Lithuanian SSR, Vilnius, Lithuania, pp. 57-70]. The application of BA to the optimization of heuristics is called the Bayesian Heuristic Approach (BHA) [Mockus (2000), A Set of Examples of Global and Discrete Optimization: Application of Bayesian Heuristic Approach, Kluwer Academic Publishers, Dordrecht, ISBN 0-7923-6359-0]. If the global minimum is known, the traditional stopping condition is applied: stop if the distance to the global minimum is within acceptable limits. If the global minimum is not known, a different approach is natural: minimize the average deviation within a fixed time limit, because there is no reason to stop earlier.
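As a toy illustration (not from the paper), the two stopping rules can be sketched with uniform random sampling on a one-dimensional test function. The function, tolerance, and budget below are illustrative assumptions:

```java
import java.util.Random;

// Sketch of the two stopping rules: stop at a known accuracy when the
// global minimum is known, or spend a fixed budget when it is not.
public class StoppingRules {

    // A test function with many local minima on [0, 1]; its global
    // minimum f(0.3) = 0 is known by construction, because the squared
    // factor vanishes there and the cosine factor stays positive.
    static double f(double t) {
        return (t - 0.3) * (t - 0.3) * (2.0 + Math.cos(25.0 * t));
    }

    // Rule 1: global minimum fStar is known -- stop once within eps of it.
    static double searchKnownMinimum(double fStar, double eps, Random rng) {
        double best = Double.POSITIVE_INFINITY;
        while (best - fStar > eps) {
            best = Math.min(best, f(rng.nextDouble()));
        }
        return best;
    }

    // Rule 2: global minimum unknown -- spend the whole fixed budget and
    // return the best value found, since there is no reason to stop earlier.
    static double searchFixedBudget(int budget, Random rng) {
        double best = Double.POSITIVE_INFINITY;
        for (int k = 0; k < budget; k++) {
            best = Math.min(best, f(rng.nextDouble()));
        }
        return best;
    }

    public static void main(String[] args) {
        Random rng = new Random(7);
        System.out.println("rule 1 (known minimum):  "
                + searchKnownMinimum(0.0, 1e-4, rng));
        System.out.println("rule 2 (fixed budget):   "
                + searchFixedBudget(1000, rng));
    }
}
```

Under rule 2 the natural performance measure is the average of the returned values over many such runs, which is the expected error the average-case analysis minimizes.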
If the distance from the global minimum is not known, the efficiency of a method is tested by comparing it with the average results of some other method. "Pure" Monte Carlo is a good candidate for such a comparison because it converges and does not depend on parameters that could be adjusted to a given problem by using expert knowledge or additional test runs. This paper gives a short presentation of the basic ideas of BHA [described in detail in Mockus (2000, A Set of Examples of Global and Discrete Optimization: Application of Bayesian Heuristic Approach, Kluwer Academic Publishers, Dordrecht, ISBN 0-7923-6359-0) and Mockus (1989, Bayesian Approach to Global Optimization, Kluwer Academic Publishers, Dordrecht-London-Boston)]. The simplest knapsack problem serves as the initial explanation of BHA. The possibilities of application are illustrated by a school scheduling problem and other examples. The material is designed for distance graduate studies of the theory of games and markets in the Internet environment. All the algorithms are implemented as platform-independent Java applets or servlets, so readers can easily verify the results and apply them both in studies and in real-life heuristic optimization problems. To support this direct reader participation, part of the paper is written as a "user guide"; the rest is a short description of the optimization algorithms and models. All the remaining information is on web sites, for example http://pilis.if.ktu.lt/~mockus.
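To make the knapsack illustration concrete, the following is a minimal sketch of the BHA idea, not the paper's implementation; the item data and the grid of parameter values are illustrative assumptions. A greedy heuristic is randomized by a parameter x, and x is tuned by the average result of repeated runs within a fixed budget. Setting x = 0 reduces the heuristic to pure Monte Carlo, the baseline discussed above.

```java
import java.util.Random;

// Bayesian Heuristic Approach sketch on a tiny 0/1 knapsack instance:
// randomize a greedy heuristic and tune the randomization parameter.
public class KnapsackBha {
    static final int[] VALUE  = {10, 7, 4, 3, 2};   // illustrative data
    static final int[] WEIGHT = { 6, 5, 3, 2, 1};
    static final int CAPACITY = 10;

    // One run of the randomized greedy heuristic: a feasible item i is
    // picked with probability proportional to (value/weight)^x, so x = 0
    // is pure Monte Carlo (uniform choice) and large x approaches the
    // deterministic greedy rule.
    static int randomizedGreedy(double x, Random rng) {
        boolean[] used = new boolean[VALUE.length];
        int left = CAPACITY, total = 0;
        while (true) {
            double[] p = new double[VALUE.length];
            double sum = 0;
            for (int i = 0; i < VALUE.length; i++) {
                if (!used[i] && WEIGHT[i] <= left) {
                    p[i] = Math.pow((double) VALUE[i] / WEIGHT[i], x);
                    sum += p[i];
                }
            }
            if (sum == 0) return total;            // no remaining item fits
            double r = rng.nextDouble() * sum;
            int pick = -1;
            for (int i = 0; i < VALUE.length; i++) {
                r -= p[i];
                if (p[i] > 0 && r <= 0) { pick = i; break; }
            }
            if (pick < 0) return total;            // floating-point guard
            used[pick] = true;
            left  -= WEIGHT[pick];
            total += VALUE[pick];
        }
    }

    // Tune x over a small grid by the average value of repeated runs,
    // within a fixed budget -- the "fixed time limit" view of the abstract.
    public static void main(String[] args) {
        Random rng = new Random(42);
        double bestX = 0, bestAvg = -1;
        for (double x : new double[]{0.0, 1.0, 2.0, 4.0}) {
            double avg = 0;
            for (int k = 0; k < 200; k++) avg += randomizedGreedy(x, rng);
            avg /= 200;
            System.out.printf("x = %.1f  average value = %.2f%n", x, avg);
            if (avg > bestAvg) { bestAvg = avg; bestX = x; }
        }
        System.out.printf("best x = %.1f%n", bestX);
    }
}
```

The comparison against x = 0 in the same loop is exactly the "pure" Monte Carlo baseline: any tuned x is useful only if its average result beats the uniform one.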