Abstract

Many real-world optimization problems are large-scale and expensive to evaluate, and the vast search space together with costly gradient computation can cause both metaheuristic and classical algorithms to fail. The problem becomes even more critical when moving from the continuous domain to discrete or mixed-type domains, because most discrete optimization problems are NP-hard and cannot be treated as convex or linear programs; consequently, no cost-effective algorithm exists for large-scale discrete global optimization (LSDGO) problems. Coordinate descent (CD) search methods, however, are well suited to optimizing large-scale expensive problems owing to their low memory demand and computational cost. In this paper, we propose a discrete version of the CD algorithm, called Discrete Coordinate Descent (DCD), as an effective method for solving LSDGO problems. The proposed algorithm builds on two essential phases, finding the region of interest and folding the search space: the folding phase halves the interval of each variable, so the whole search space shrinks by a factor of ${\left( {\frac{1}{2}} \right)^D}$ at each iteration, where $D$ denotes the problem dimension. Because it shrinks the search space rapidly, the algorithm requires only a small computational budget to find the optimal value of each coordinate. To assess its efficiency precisely, we tested DCD on 20 well-known large-scale problems with dimensions 30, 50, 100, and 1000. The results demonstrate the effectiveness of DCD not only on low-dimensional discrete problems but also on large-scale discrete optimization problems.
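The abstract does not spell out either phase in detail, so the following is only a minimal Python sketch of the folding idea it describes: each coordinate's interval is split in half, a representative point in each half is probed, and the interval is folded toward the more promising half, shrinking the box by roughly ${\left( {\frac{1}{2}} \right)^D}$ per sweep. The function name `dcd_sketch`, the midpoint probing rule, and the stopping test are our assumptions, not the authors' method, and the binary folding is only guaranteed to converge on separable, unimodal objectives.

```python
import numpy as np

def dcd_sketch(f, lo, hi, max_iters=50):
    """Hypothetical sketch of the search-space-folding idea (not the
    authors' DCD). f is an objective on integer vectors (minimized);
    lo and hi are integer arrays bounding the search box per coordinate.
    Each sweep halves every coordinate's interval, so the box volume
    shrinks by about (1/2)^D per iteration."""
    lo, hi = np.array(lo, dtype=int), np.array(hi, dtype=int)
    x = (lo + hi) // 2                       # start at the box centre
    for _ in range(max_iters):
        for d in range(len(x)):
            mid = (lo[d] + hi[d]) // 2
            # Probe the midpoint of each half of coordinate d (assumed rule).
            left, right = x.copy(), x.copy()
            left[d] = (lo[d] + mid) // 2
            right[d] = (mid + 1 + hi[d]) // 2
            # Fold the interval toward the more promising half.
            if f(left) <= f(right):
                hi[d] = mid
                x[d] = left[d]
            else:
                lo[d] = mid + 1
                x[d] = right[d]
        if np.all(lo == hi):                 # every interval has collapsed
            break
    return x, f(x)

# Example: minimize a separable quadratic over {-100, ..., 100}^5;
# the sketch converges to the optimum x = (7, ..., 7).
f = lambda x: float(np.sum((x - 7) ** 2))
x_best, f_best = dcd_sketch(f, lo=[-100] * 5, hi=[100] * 5)
```

Note that each sweep costs two evaluations per coordinate, which is consistent with the abstract's claim that rapid shrinking keeps the computational budget per coordinate low.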
