Abstract

Alternating direction methods of multipliers (ADMMs) are popular approaches for handling large scale semidefinite programs and have gained considerable attention during the past decade. In this paper, we focus on solving doubly nonnegative programs (DNNs), which are semidefinite programs where the elements of the matrix variable are constrained to be nonnegative. Starting from two algorithms already proposed in the literature on conic programming, we introduce two new ADMMs by employing a factorization of the dual variable. It is well known that first order methods are not suitable for computing high precision optimal solutions; however, an optimal solution of moderate precision often suffices to obtain high quality lower bounds on the primal optimal objective function value. We present methods to obtain such bounds either by perturbing the dual objective function value or by constructing a dual feasible solution from an approximate dual optimal solution. Both procedures can be used as a post-processing phase in our ADMMs. We present numerical results for DNNs that are relaxations of the stable set problem. They show the impact of using the factorization of the dual variable to improve the progress towards the optimal solution within an iteration of the ADMM. This decreases the number of iterations as well as the CPU time needed to solve the DNN to a given precision. The experiments also demonstrate that with a computationally cheap post-processing phase, we can compute bounds that are close to the optimal value even if the DNN was solved only to moderate precision. This makes ADMMs applicable within a branch-and-bound algorithm as well.
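To fix the setting, a generic doubly nonnegative program of the type described above, together with its dual, can be written as follows; this is the standard primal–dual pair consistent with the description in the abstract, with placeholder data $C$, $A_i$ and $b$ rather than the concrete problem data of the paper:

$$
\begin{aligned}
\text{(P)}\qquad &\min_{X}\ \langle C, X\rangle \quad\text{s.t.}\quad \langle A_i, X\rangle = b_i,\ \ i=1,\dots,m,\qquad X \succeq 0,\ \ X \ge 0,\\[2pt]
\text{(D)}\qquad &\max_{y,\,T}\ b^{\top} y \quad\ \text{s.t.}\quad C - \sum_{i=1}^{m} y_i A_i - T \succeq 0,\qquad T \ge 0,
\end{aligned}
$$

where $X \succeq 0$ means that $X$ is positive semidefinite, and $X \ge 0$, $T \ge 0$ denote elementwise nonnegativity. By weak duality, the objective value $b^{\top} y$ of any dual feasible pair $(y, T)$ is a lower bound on the optimal value of (P); this is the kind of dual bound referred to above.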

Highlights

  • In a semidefinite program (SDP) one wants to find a positive semidefinite matrix such that constraints that are linear in the entries of the matrix are fulfilled and a linear objective function is minimized

  • Starting from two algorithms already proposed in the literature on conic programming, we introduce two new alternating direction methods of multipliers (ADMMs) by employing a factorization of the dual variable

  • The experiments demonstrate that with a computationally cheap post-processing phase, we can compute bounds that are close to the optimal value even if the doubly nonnegative program (DNN) was solved only to moderate precision

Summary

Introduction

In a semidefinite program (SDP) one wants to find a positive semidefinite (and symmetric) matrix such that constraints that are linear in the entries of the matrix are fulfilled and a linear objective function is minimized. One can directly apply these ADMMs to solve DNNs, too, by introducing nonnegative slack variables for the nonnegativity constraints in order to obtain equality constraints only; this, however, increases the size of the problem significantly. In case the DNN is used as a relaxation of some combinatorial optimization problem, one is interested in dual bounds, i.e. bounds given by the dual objective function value of a dual feasible solution. First order methods can compute solutions of moderate precision in reasonable time, whereas progressing to higher precision can become expensive. To overcome this drawback, we present two methods to compute a dual bound from a solution obtained by the ADMMs within a post-processing phase.
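A simple way to see how a valid bound can be recovered from an approximate dual point is the following eigenvalue-shift safeguard: for any multipliers $y$ and any elementwise nonnegative $T$, the matrix $S = C - \sum_i y_i A_i - T$ need not be positive semidefinite, but if an upper bound on the trace of every primal feasible $X$ is known, adding that bound times the most negative eigenvalue of $S$ to $b^{\top} y$ gives a valid lower bound on the primal optimal value. The sketch below illustrates this generic safeguard in NumPy; the function name, the argument trace_bound and the assumption that $S$ is passed explicitly are illustrative choices, not the exact error-bound or Nightjet procedures developed in the paper.

```python
import numpy as np

def safeguarded_dual_bound(b, y, S, trace_bound):
    """Valid lower bound on the primal optimal value from an approximate dual
    point (illustrative helper, see the assumptions stated above).

    b           -- right-hand side of the primal equality constraints
    y           -- approximate dual multipliers
    S           -- symmetric matrix C - sum_i y_i * A_i - T with T >= 0 elementwise;
                   positive semidefinite only at an exact dual solution
    trace_bound -- known upper bound on trace(X) over all primal feasible X
    """
    lam_min = np.linalg.eigvalsh(S)[0]      # eigenvalues in ascending order
    # If S is (numerically) not PSD, shift the dual objective by the violation.
    return float(b @ y + trace_bound * min(lam_min, 0.0))
```

For relaxations of the stable set problem such as the Lovász theta function, the trace of the matrix variable is fixed to one by the constraints, so a trace bound of this kind is available at no extra cost.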

Problem formulation and notations
ADMMs for doubly nonnegative programs
ConicADMM3c
Dual matrix factorization
DADMM3c
Computation of dual bounds
Dual bounds through error bounds
Dual bounds through the Nightjet procedure
The stable set problem and an SDP relaxation
Comparison of the evolution of the dual bounds
Computational setup
Comparison between ConicADMM3c and DADMM3c
Conclusions
Compliance with ethical standards