Abstract

Traditional automatic tuning systems are based on an exploration-exploitation tradeoff: learning the behavior of the algorithm to tune on several benchmarks (exploration), then using the learned behavior to solve new problem instances (exploitation). For algorithms targeting NP-hard problems, this vision is questionable because the exploration phase can require a huge runtime. In this paper, we introduce QTuning, a new automatic tuning system specially designed for NP-hard algorithms. Like traditional tuning systems, QTuning uses benchmarks, but during the learning process new benchmark entries can be introduced and existing ones removed at any time. Moreover, the system interleaves the exploration and exploitation phases. The main contribution of this paper is to formulate the learning process in QTuning within an active learning framework. The framework builds on a classical observation in optimization: the efficiency of random search for regret minimization. We improve our random search algorithm by including a machine learning classification approach and a set intersection problem. Finally, we discuss the experimental evaluation of the framework on the satisfiability problem.
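To make the regret-minimization intuition concrete, here is a minimal sketch (not QTuning itself) of random search over solver configurations, keeping the best configuration found so far on a benchmark set. All names, parameters, and the toy cost function `run_solver` are hypothetical illustrations, not the paper's actual tuner.

```python
import random

def run_solver(config, instance):
    """Toy stand-in for running the tuned algorithm on one benchmark
    instance: returns a synthetic cost depending on the configuration
    and a per-instance hardness value."""
    return instance * (1.0 + abs(config["decay"] - 0.95)) + 100.0 / config["restart_interval"]

def random_search_tuning(instances, n_trials=100, seed=0):
    """Sample configurations uniformly at random and keep the incumbent.
    The gap between the best sampled cost and the (unknown) optimum is
    the simple regret that random search is known to shrink quickly."""
    rng = random.Random(seed)
    best_config, best_cost = None, float("inf")
    for _ in range(n_trials):
        # Exploration: draw a configuration at random from the space.
        config = {
            "restart_interval": rng.choice([50, 100, 200, 400]),
            "decay": rng.uniform(0.8, 0.99),
        }
        # Aggregate cost over the current benchmark set (which, as in
        # the abstract, could grow or shrink between tuning rounds).
        cost = sum(run_solver(config, inst) for inst in instances)
        if cost < best_cost:
            best_config, best_cost = config, cost
    return best_config, best_cost

if __name__ == "__main__":
    best, cost = random_search_tuning(instances=[1.0, 2.5, 4.0])
    print(best, cost)
```

A classifier, as mentioned in the abstract, could then be layered on top of such a loop to filter out configurations predicted to perform poorly before they are ever run.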
