Abstract

Many combinatorial optimization problems are considered intractable to solve exactly or even approximately. An example of such a problem is maximum clique, which, under standard assumptions in complexity theory, cannot be solved in sub-exponential time or approximated within a polynomial factor efficiently. However, we show that if a polynomial-time algorithm can query informative Gaussian priors from an expert poly(n) times, then a class of combinatorial optimization problems can be solved efficiently up to a multiplicative factor ϵ, where ϵ is an arbitrary constant. In this paper, we present proofs of our claims and numerical results that support them. Our methods can cast new light on how to approach optimization problems in domains where even approximating the problem is not feasible. Furthermore, the results can help researchers understand the structure of these problems (or whether these problems have any structure at all!). While the proposed methods can be used to approximate combinatorial problems in NPO, we note that the scope of solvable problems may well include problems that are provably intractable (problems in EXPTIME).

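To make the expert-query setting above concrete, here is a minimal sketch of what such an interaction could look like: an oracle that, when queried at a point, returns an informative Gaussian prior (a mean and a standard deviation) over the objective value there. The `ExpertOracle` class, its `query_prior` method, and the simulation of the expert via noisy access to a hidden objective are our own illustrative assumptions, not the paper's actual interface.

```python
import numpy as np

class ExpertOracle:
    """Hypothetical expert interface: each query returns an informative
    Gaussian prior (mean, std) over the objective value at a point.
    The expert is simulated here by noisy access to a hidden objective;
    the paper does not specify this interface, so it is illustrative only."""

    def __init__(self, hidden_objective, noise_std=0.1, seed=0):
        self.f = hidden_objective
        self.noise_std = noise_std
        self.rng = np.random.default_rng(seed)

    def query_prior(self, x):
        # The expert's belief is centred near the true value,
        # with an honestly reported uncertainty.
        return self.f(x) + self.rng.normal(0.0, self.noise_std), self.noise_std

def pick_with_expert(candidates, oracle):
    """Query the expert once per candidate (a poly(n)-sized budget when
    len(candidates) is poly(n)) and return the candidate whose prior
    mean is highest."""
    means = {x: oracle.query_prior(x)[0] for x in candidates}
    return max(means, key=means.get)

# Usage: the expert's priors guide the choice among a small candidate set.
oracle = ExpertOracle(lambda x: -(x - 4.0) ** 2)
best = pick_with_expert(range(10), oracle)  # expect a candidate near 4
```
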
Highlights

  • Many combinatorial optimization problems are considered intractable to solve exactly or even approximately

  • We refer to a recent paper [27], where the authors showed that any ratio between the expected global optimum and the expected optimum found by their Bayesian optimization (BO) algorithms (UCB2 or EI2) can be achieved for functions with finite domains in exponential time

  • In Algorithm 1, we show that any combinatorial problem with a combinatorial complexity of O(2^n) can be reduced to a black-box univariate function with a finite domain (an illustrative sketch of such an encoding follows this list)

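To illustrate the reduction stated in the last highlight (given as Algorithm 1 in the paper), the sketch below encodes maximum clique as a black-box univariate function over the finite domain {0, ..., 2^n − 1}: each integer is read as a bitmask selecting a vertex subset. The bitmask encoding and the helper `make_clique_objective` are our own illustration, not the paper's exact construction.

```python
import itertools

def make_clique_objective(adjacency):
    """Turn maximum clique on an n-vertex graph into a univariate
    black-box function f: {0, ..., 2^n - 1} -> N.  The integer x is
    read as a bitmask selecting a vertex subset; f(x) is the subset's
    size if it forms a clique, and 0 otherwise."""
    n = len(adjacency)

    def f(x):
        subset = [v for v in range(n) if (x >> v) & 1]
        for u, v in itertools.combinations(subset, 2):
            if not adjacency[u][v]:
                return 0  # the selected subset is not a clique
        return len(subset)

    return f

# Usage: a triangle plus one isolated vertex; the optimum f-value is 3.
adj = [[0, 1, 1, 0],
       [1, 0, 1, 0],
       [1, 1, 0, 0],
       [0, 0, 0, 0]]
f = make_clique_objective(adj)
best = max(range(2 ** 4), key=f)  # exhaustive scan, only for this toy example
print(best, f(best))              # 7 (binary 0111) with clique size 3
```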

Summary

Bayesian Optimization

Global optimization (GO) aims to find the optimal value(s) of functions, called objective functions, over either finite or bounded domains [14]. After a prior is set over the unknown function, an algorithm suggests points where the optimal values may lie. The suggestions, and how they are derived from the prior and posterior, are defined by an acquisition function [16], for which multiple possibilities exist. These include the upper confidence bound [28], expected improvement [29], and Thompson sampling [30], to name a few. The regret [18], which is used to characterize the convergence rate of a BO algorithm, is based on expectations, that is, the ratio between the expected global optimum and the expected best value found in the function. We show how a (super-)exponential convergence rate can be derived using our human–algorithm collaboration procedure.
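As a concrete illustration of the BO loop just described, the sketch below runs Gaussian-process regression over a finite one-dimensional domain and selects points with the upper-confidence-bound acquisition [28]. It is a minimal sketch under our own assumptions: the RBF kernel, its length scale, and the UCB weight `beta` are illustrative choices, not the paper's settings.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential (RBF) kernel between 1-D point sets a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-6):
    """Exact GP regression posterior mean and std at the query points."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_query)
    K_ss = rbf_kernel(x_query, x_query)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s.T @ alpha
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

def bo_maximize(f, domain, n_iters=20, beta=2.0, seed=0):
    """Bayesian optimization on a finite 1-D domain with UCB acquisition:
    evaluate the argmax of mean + beta * std, update the posterior, repeat."""
    rng = np.random.default_rng(seed)
    xs = [float(rng.choice(domain))]  # one random initial evaluation
    ys = [f(xs[0])]
    for _ in range(n_iters - 1):
        mean, std = gp_posterior(np.array(xs), np.array(ys), domain)
        x_next = float(domain[np.argmax(mean + beta * std)])
        xs.append(x_next)
        ys.append(f(x_next))
    i_best = int(np.argmax(ys))
    return xs[i_best], ys[i_best]

# Usage on a toy finite domain of 256 points:
domain = np.linspace(0.0, 10.0, 256)
x_best, y_best = bo_maximize(lambda x: -(x - 3.3) ** 2, domain)
```

Swapping the acquisition line for expected improvement [29] or a Thompson sample from the posterior [30] changes only how `x_next` is chosen; the rest of the loop is unchanged.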

Combinatorial Problems as Univariate Finite Domain Functions
Human–Algorithm Collaboration in Bayesian Optimization
Results and Discussion
Conclusions