Abstract

Of central importance to the αBB algorithm is the calculation of the α values that guarantee the convexity of the underestimator. Improving (reducing) these values yields tighter underestimators and can thus increase the performance of the algorithm. For instance, it was shown by Wechsung et al. (J Glob Optim 58(3):429–438, 2014) that the emergence of the cluster effect can depend on the magnitude of the α values. Motivated by this, we present a refinement method that can reduce the magnitude of the α values given by the scaled Gerschgorin method and thus create tighter convex underestimators for the αBB algorithm. We apply the new method and compare it with the scaled Gerschgorin method on randomly generated symmetric interval matrices as well as on interval Hessians taken from test functions. As a measure of comparison, we use the maximal separation distance between the original function and the underestimator. Based on the results obtained, we conclude that the proposed refinement method can significantly reduce the maximal separation distance compared to the scaled Gerschgorin method. This approach therefore has the potential to improve the performance of the αBB algorithm.

Highlights

  • The αBB algorithm [1,2,5,15] is a branch-and-bound algorithm which is based on creating convex underestimators for general twice-continuously differentiable (C2) functions

  • We have presented a refinement method which we use in conjunction with the scaled Gerschgorin method in order to improve the α values needed for the convex underestimator of the deterministic global optimization algorithm αBB

  • The refinement method can be utilized with other available methods for the calculation of the α values


Summary

Introduction

The αBB algorithm [1,2,5,15] is a branch-and-bound algorithm based on creating convex underestimators for general twice-continuously differentiable (C2) functions. A number of methods for calculating α values that are rigorously valid, i.e., such that the underestimator is guaranteed to be convex, have been presented in the literature [2,12,19,20]. Typically, though not necessarily, there is a trade-off between the tightness of the underestimator and the computational cost. In Section 2 we begin by briefly presenting the αBB underestimator for general C2 functions and the scaled Gerschgorin method for calculating the α values of the underestimator.
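As a concrete illustration of the two ingredients named above, the following sketch computes α values with the standard scaled Gerschgorin formula, α_i = max{0, −½(h̲_ii − Σ_{j≠i} max(|h̲_ij|, |h̄_ij|) d_j/d_i)}, applied to an interval Hessian [H̲, H̄] on a box, and builds the corresponding αBB underestimator L(x) = f(x) + Σ_i α_i (x_i^L − x_i)(x_i^U − x_i). This is a minimal self-contained sketch, not the paper's implementation; the common choice of scaling vector d_i = x_i^U − x_i^L is assumed, and function and variable names are illustrative only.

```python
import numpy as np

def scaled_gerschgorin_alphas(H_low, H_up, x_low, x_up):
    """Scaled Gerschgorin alpha values for the interval Hessian
    [H_low, H_up] over the box [x_low, x_up].

    Uses the common scaling d_i = x_up_i - x_low_i (an assumption;
    other positive scaling vectors are possible)."""
    n = H_low.shape[0]
    d = x_up - x_low
    alphas = np.zeros(n)
    for i in range(n):
        # Worst-case magnitude of each off-diagonal entry, scaled by d_j / d_i.
        off = sum(max(abs(H_low[i, j]), abs(H_up[i, j])) * d[j] / d[i]
                  for j in range(n) if j != i)
        # alpha_i = max{0, -1/2 (lower bound of h_ii - off-diagonal sum)}
        alphas[i] = max(0.0, -0.5 * (H_low[i, i] - off))
    return alphas

def alpha_bb_underestimator(f, alphas, x_low, x_up):
    """Return L(x) = f(x) + sum_i alpha_i (x_low_i - x_i)(x_up_i - x_i),
    the convex alphaBB underestimator of f on the box."""
    def L(x):
        return f(x) + np.sum(alphas * (x_low - x) * (x_up - x))
    return L
```

For example, for f(x) = x1·x2 on [0, 1]^2 the Hessian is constant with zero diagonal and unit off-diagonals, giving α = (0.5, 0.5); the maximal separation between f and L is then ¼ Σ_i α_i (x_i^U − x_i^L)², attained at the box midpoint, which is exactly the comparison measure used in the abstract.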

The αBB underestimator and the scaled Gerschgorin method
Haynsworth’s theorem
The refinement algorithm
Results on random symmetric interval matrices
Results on random interval Hessian matrices
Conclusions
