Abstract

Binary optimization, a representative subclass of discrete optimization, plays an important role in mathematical optimization and has various applications in computer vision and machine learning. Binary optimization problems are generally NP-hard and difficult to solve due to the binary constraints, especially when the number of variables is very large. Existing methods often suffer from high computational costs or large accumulated quantization errors, or are designed only for specific tasks. In this paper, we propose an efficient algorithm, named Discrete Principal Coordinate Descent (DPCD), to find effective approximate solutions to general binary optimization problems. At each step, the proposed algorithm solves an optimization problem based on a linear approximation of the loss function, which updates the binary variables that most affect the loss value. Our method supports a wide range of empirical objective functions, with or without restrictions on the numbers of 1s and -1s in the binary variables. Furthermore, we prove the theoretical convergence of our algorithm and derive explicit convergence rates for objective functions with Lipschitz continuous gradients, which are commonly adopted in practice. Extensive experiments on binary hashing tasks with large-scale datasets demonstrate the superiority of the proposed algorithm over several state-of-the-art methods in terms of both effectiveness and efficiency.
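
To make the flavor of such an update concrete, below is a minimal, hypothetical Python sketch of this style of coordinate descent: linearize the loss around the current binary point, then flip the coordinates whose gradient entries are largest in magnitude, since these most reduce the linear approximation. The function `dpcd_style_step`, the top-k selection rule, and the gradient oracle `grad_f` are illustrative assumptions, not the paper's exact DPCD procedure or its constrained variants.

```python
import numpy as np

def dpcd_style_step(x, grad_f, k=1):
    """One illustrative update: minimize the linear approximation
    f(x) ~ f(x0) + g^T (x - x0) over x in {-1, +1}^n, restricted to
    the k coordinates with the largest |g_i|. (Assumption: this
    top-k rule stands in for the paper's principal-coordinate
    selection, which is not reproduced here.)"""
    g = grad_f(x)                      # gradient of the loss at the current point
    idx = np.argsort(-np.abs(g))[:k]   # coordinates with the most impact on the loss
    x_new = x.copy()
    x_new[idx] = -np.sign(g[idx])      # per coordinate, minimizes g_i * x_i over {-1, +1}
    x_new[x_new == 0] = 1              # break ties when a gradient entry is exactly 0
    return x_new

# Toy usage: approximately minimize a smooth quadratic over binary variables.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)); A = A @ A.T      # PSD quadratic term
b = rng.standard_normal(8)
f = lambda x: 0.5 * x @ A @ x - b @ x
grad_f = lambda x: A @ x - b
x = np.sign(rng.standard_normal(8)); x[x == 0] = 1
for _ in range(20):
    x = dpcd_style_step(x, grad_f, k=2)
print("binary solution:", x, "loss:", f(x))
```

In this sketch, a larger k updates more variables per step at the risk of overshooting the linear approximation; the abstract's restrictions on the numbers of 1s and -1s would enter as an additional constraint on which flips are admissible.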
