Algorithm xxxx: HDSDP: Software for Semidefinite Programming
HDSDP is a numerical software package for solving semidefinite programming problems. The main framework of HDSDP resembles that of the dual-scaling interior point solver DSDP [Benson and Ye, 2008], and several new features have been implemented, including a dual method based on the simplified homogeneous self-dual embedding. The embedding technique enhances the stability of the dual method, and several new heuristics and computational techniques are designed to accelerate its convergence. HDSDP aims to show how the dual-scaling algorithm benefits from the self-dual embedding, and it is developed in parallel to DSDP5.8. Numerical experiments on several classical benchmark datasets exhibit its robustness and efficiency, particularly its advantages on SDP instances featuring low-rank structure and sparsity. HDSDP is open-sourced under an MIT license and available at https://github.com/Gwzwpxz/HDSDP .
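As background for the simplified homogeneous self-dual embedding mentioned above (the abstract does not spell out the formulation, so the following is the standard conic form rather than necessarily the exact system HDSDP solves), the embedding of a primal-dual pair $\min\{\langle c,x\rangle : \mathcal{A}x=b,\ x\in\mathcal{K}\}$ reads:

```latex
\begin{aligned}
\mathcal{A}x - b\tau &= 0, \\
-\mathcal{A}^{*}y + c\tau - s &= 0, \\
b^{\top}y - \langle c, x\rangle - \kappa &= 0, \\
x \in \mathcal{K}, \quad s \in \mathcal{K}^{*}, \quad \tau \ge 0, \quad \kappa \ge 0.
\end{aligned}
```

A solution with $\tau > 0$ recovers optimal solutions $x/\tau$ and $(y/\tau, s/\tau)$, while $\kappa > 0$ certifies primal or dual infeasibility; the system always admits a trivial interior starting point, which is what makes the embedding attractive for initialization and stability.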
- 10.1145/1356052.1356057 · ACM Transactions on Mathematical Software · May 1, 2008 · 35 citations
- 10.1007/s10107-002-0355-5 · Mathematical Programming · Feb 1, 2003 · 175 citations
- 10.1016/s0927-0507(05)12008-8 · Handbooks in Operations Research and Management Science · Jan 1, 2005 · 110 citations
- 10.1023/a:1013777203597 · Computational Optimization and Applications · Jan 1, 2002 · 22 citations
- 10.1137/s1052623495294955 · SIAM Journal on Optimization · Nov 1, 1998 · 87 citations
- 10.1080/10556789808805691 · Optimization Methods and Software · Jan 1, 1998 · 45 citations
- 10.1145/984622.984630 · Apr 26, 2004 · 519 citations
- 10.1007/s12532-015-0082-6 · Mathematical Programming Computation · May 5, 2015 · 250 citations
- 10.1145/227683.227684 · Journal of the ACM · Nov 1, 1995 · 3638 citations
- 10.1080/10556789908805761 · Optimization Methods and Software · Jan 1, 1999 · 35 citations
- Research Article · 10.1016/j.ifacol.2017.08.1569 · IFAC PapersOnLine · Jul 1, 2017 · 14 citations
- Fast ADMM for homogeneous self-dual embedding of sparse SDPs · Research Article · 10.1007/s10107-019-01366-3 · Mathematical Programming · Feb 20, 2019 · 77 citations
We employ chordal decomposition to reformulate a large and sparse semidefinite program (SDP), either in primal or dual standard form, into an equivalent SDP with smaller positive semidefinite (PSD) constraints. In contrast to previous approaches, the decomposed SDP is suitable for the application of first-order operator-splitting methods, enabling the development of efficient and scalable algorithms. In particular, we apply the alternating direction method of multipliers (ADMM) to solve decomposed primal- and dual-standard-form SDPs. Each iteration of such ADMM algorithms requires a projection onto an affine subspace, and a set of projections onto small PSD cones that can be computed in parallel. We also formulate the homogeneous self-dual embedding (HSDE) of a primal-dual pair of decomposed SDPs, and extend a recent ADMM-based algorithm to exploit the structure of our HSDE. The resulting HSDE algorithm has the same leading-order computational cost as those for the primal or dual problems only, with the advantage of being able to identify infeasible problems and produce an infeasibility certificate. All algorithms are implemented in the open-source MATLAB solver CDCS. Numerical experiments on a range of large-scale SDPs demonstrate the computational advantages of the proposed methods compared to common state-of-the-art solvers.
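The per-iteration PSD-cone projections described in this abstract reduce, for each small clique block, to an eigenvalue clipping. A minimal NumPy sketch of that standard operation (illustrative only, not CDCS code):

```python
import numpy as np

def project_psd(S: np.ndarray) -> np.ndarray:
    """Euclidean projection of a symmetric matrix onto the PSD cone:
    eigendecompose and zero out the negative eigenvalues."""
    S = (S + S.T) / 2.0                    # symmetrize against round-off
    w, V = np.linalg.eigh(S)
    return (V * np.clip(w, 0.0, None)) @ V.T
```

In a chordal-decomposition ADMM, this projection is applied independently, and hence in parallel, to each small clique block of the matrix variable.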
- Book Chapter · 10.1007/978-3-319-97478-1_3 · Jan 1, 2018 · 2 citations
Many semidefinite programs (SDPs) arising in practical applications have useful structural properties that can be exploited at the algorithmic level. In this chapter, we review two decomposition frameworks for large-scale SDPs characterized by either chordal aggregate sparsity or partial orthogonality. Chordal aggregate sparsity allows one to decompose the positive semidefinite matrix variable in the SDP, while partial orthogonality enables the decomposition of the affine constraints. The decomposition frameworks are particularly suitable for the application of first-order algorithms. We describe how the decomposition strategies enable one to speed up the iterations of a first-order algorithm, based on the alternating direction method of multipliers, for the solution of the homogeneous self-dual embedding of a primal-dual pair of SDPs. Precisely, we give an overview of two structure-exploiting algorithms for semidefinite programming, which have been implemented in the open-source MATLAB solver CDCS. Numerical experiments on a range of large-scale SDPs demonstrate that the decomposition methods described in this chapter promise significant computational gains.
- Research Article · 10.1109/tsp.2015.2443731 · IEEE Transactions on Signal Processing · Jun 2, 2015 · 149 citations
Convex optimization is a powerful tool for resource allocation and signal processing in wireless networks. As the network density is expected to drastically increase in order to accommodate the exponentially growing mobile data traffic, performance optimization problems are entering a new era characterized by a high dimension and/or a large number of constraints, which poses significant design and computational challenges. In this paper, we present a novel two-stage approach to solve large-scale convex optimization problems for dense wireless cooperative networks, which can effectively detect infeasibility and enjoy modeling flexibility. In the proposed approach, the original large-scale convex problem is transformed into a standard cone programming form in the first stage via matrix stuffing, which only needs to copy the problem parameters such as channel state information (CSI) and quality-of-service (QoS) requirements to the prestored structure of the standard form. The capability of yielding infeasibility certificates and enabling parallel computing is achieved by solving the homogeneous self-dual embedding of the primal-dual pair of the standard form. In the solving stage, the operator splitting method, namely, the alternating direction method of multipliers (ADMM), is adopted to solve the large-scale homogeneous self-dual embedding. Compared with second-order methods, ADMM can solve large-scale problems in parallel with modest accuracy within a reasonable amount of time. Simulation results will demonstrate the speedup, scalability, and reliability of the proposed framework compared with the state-of-the-art modeling frameworks and solvers.
- Research Article · 10.1007/s10957-016-0892-3 · Journal of Optimization Theory and Applications · Feb 22, 2016 · 546 citations
We introduce a first-order method for solving very large convex cone programs. The method uses an operator splitting method, the alternating directions method of multipliers, to solve the homogeneous self-dual embedding, an equivalent feasibility problem involving finding a nonzero point in the intersection of a subspace and a cone. This approach has several favorable properties. Compared to interior-point methods, first-order methods scale to very large problems, at the cost of requiring more time to reach very high accuracy. Compared to other first-order methods for cone programs, our approach finds both primal and dual solutions when available or a certificate of infeasibility or unboundedness otherwise, is parameter free, and the per-iteration cost of the method is the same as applying a splitting method to the primal or dual alone. We discuss efficient implementation of the method in detail, including direct and indirect methods for computing projection onto the subspace, scaling the original problem data, and stopping criteria. We describe an open-source implementation, which handles the usual (symmetric) nonnegative, second-order, and semidefinite cones as well as the (non-self-dual) exponential and power cones and their duals. We report numerical results that show speedups over interior-point cone solvers for large problems, and scaling to very large general cone programs.
- Research Article · 10.1109/tac.2018.2886170 · IEEE Transactions on Automatic Control · Sep 1, 2019 · 23 citations
When sum-of-squares (SOS) programs are recast as semidefinite programs (SDPs) using the standard monomial basis, the constraint matrices in the SDP possess a structural property that we call partial orthogonality. In this paper, we leverage partial orthogonality to develop a fast first-order method, based on the alternating direction method of multipliers (ADMM), for the solution of the homogeneous self-dual embedding of SDPs describing SOS programs. Precisely, we show how a “diagonal plus low rank” structure implied by partial orthogonality can be exploited to project efficiently the iterates of a recent ADMM algorithm for generic conic programs onto the set defined by the affine constraints of the SDP. The resulting algorithm, implemented as a new package in the solver CDCS, is tested on a range of large-scale SOS programs arising from constrained polynomial optimization problems and from Lyapunov stability analysis of polynomial dynamical systems. These numerical experiments demonstrate the effectiveness of our approach compared to common state-of-the-art solvers.
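The "diagonal plus low rank" structure described above is typically exploited through the Sherman-Morrison-Woodbury identity, which turns an n-by-n solve into an r-by-r one. A generic sketch under that assumption (not the CDCS implementation):

```python
import numpy as np

def solve_diag_plus_low_rank(d: np.ndarray, U: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Solve (diag(d) + U @ U.T) x = b via the Woodbury identity:

        x = D^{-1} b - D^{-1} U (I + U^T D^{-1} U)^{-1} U^T D^{-1} b

    which costs O(n r^2) instead of O(n^3) when U is n-by-r with r << n."""
    Dinv_b = b / d
    Dinv_U = U / d[:, None]
    r = U.shape[1]
    capacitance = np.eye(r) + U.T @ Dinv_U   # small r-by-r system
    return Dinv_b - Dinv_U @ np.linalg.solve(capacitance, U.T @ Dinv_b)
```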
- Research Article · 10.1080/10556780008805800 · Optimization Methods and Software · Dec 3, 1998 · 81 citations
How to initialize an algorithm to solve an optimization problem is of great theoretical and practical importance. In the simplex method for linear programming this issue is resolved by either the two-phase approach or the so-called big-M technique. In the interior point method, there is a more elegant way to deal with the initialization problem, namely the self-dual embedding technique proposed by Ye, Todd and Mizuno [30]. For linear programming this technique makes it possible to identify an optimal solution, or to conclude that the problem is infeasible or unbounded, by solving its embedded self-dual problem. The embedded self-dual problem has a trivial initial solution and has the same structure as the original problem. Hence, it eliminates the need to consider the initialization problem at all. In this paper, we extend this approach to solve general conic convex programming, including semidefinite programming. Since a nonlinear conic convex programming problem may lack the so-called strict complementarity property, this causes difficulties in identifying solutions for the original problem based on solutions of the embedded self-dual system. We provide numerous examples from semidefinite programming to illustrate various possibilities which have no analogue in the linear programming case.
- Research Article · 10.1051/ro/2016020 · RAIRO - Operations Research · Feb 8, 2017 · 4 citations
We propose a family of search directions based on primal-dual entropy in the context of interior-point methods for linear optimization. We show that by using entropy-based search directions in the predictor step of a predictor-corrector algorithm together with a homogeneous self-dual embedding, we can achieve the current best iteration complexity bound for linear optimization. Then, we focus on some wide neighborhood algorithms and show that in our family of entropy-based search directions, we can find the best search direction and step size combination by performing a plane search at each iteration. For this purpose, we propose a heuristic plane search algorithm as well as an exact one. Finally, we perform computational experiments to study the performance of entropy-based search directions in wide neighborhoods of the central path, with and without utilizing the plane search algorithms.
- Research Article · 10.1007/s10055-009-0143-0 · Virtual Reality · Nov 19, 2009 · 5 citations
The two main objectives of virtual assembly are: (1) to train assembly-operators through virtual assembly models, and (2) to simultaneously evaluate products for ease-of-assembly. The focus of this paper is on developing computational techniques for virtual assembly of thin deformable beam and plate-like objects. To meet the objectives of virtual assembly, the underlying computational technique must: (1) be carried out at a high frame-rate (>20 frames/second), (2) be accurate (<5% error in deformation and force estimation), (3) be conducive to collision detection, and (4) support rapid design evaluations. We argue in this paper that popular computational techniques such as 3-D finite element analysis, boundary element analysis and classic beam/plate/shell analysis fail to meet these requirements. We therefore propose a new class of dual representation techniques for virtual assembly of thin solids, where the geometry is retained in its full 3-D form, while the underlying physics is dimensionally reduced, delivering: (1) high computational efficiency and accuracy (over 20 frames per second with <1% deformation error), and (2) direct CAD model processing, i.e., the CAD model is not geometrically simplified, and 3-D finite element mesh is not generated. In particular, a small-size stiffness matrix with about 300 degrees of freedom per deformable object is generated directly from a coarse surface triangulation, and its LU-decomposition is then exploited during real-time simulation. The accuracy and efficiency of the proposed method are established through numerical experiments and a case study.
- Research Article · 10.1007/s10107-005-0575-6 · Feb 24, 2005 · 21 citations
This paper presents a new and high performance solution method for multistage stochastic convex programming. Stochastic programming is a quantitative tool developed in the field of optimization to cope with the problem of decision-making under uncertainty. Among others, stochastic programming has found many applications in finance, such as asset-liability and bond-portfolio management. However, many stochastic programming applications still remain computationally intractable because of their overwhelming dimensionality. In this paper we propose a new decomposition algorithm for multistage stochastic programming with a convex objective and stochastic recourse matrices, based on the path-following interior point method combined with the homogeneous self-dual embedding technique. Our preliminary numerical experiments show that this approach is very promising in many ways for solving generic multistage stochastic programming, including its superiority in terms of numerical efficiency, as well as the flexibility in testing and analyzing the model.
- Research Article · 10.1007/s12532-010-0020-6 · Mathematical Programming Computation · Nov 20, 2010 · 131 citations
Sparse covariance selection problems can be formulated as log-determinant (log-det) semidefinite programming (SDP) problems with large numbers of linear constraints. Standard primal–dual interior-point methods that are based on solving the Schur complement equation would encounter severe computational bottlenecks if they are applied to solve these SDPs. In this paper, we consider a customized inexact primal–dual path-following interior-point algorithm for solving large scale log-det SDP problems arising from sparse covariance selection problems. Our inexact algorithm solves the large and ill-conditioned linear system of equations in each iteration by a preconditioned iterative solver. By exploiting the structures in sparse covariance selection problems, we are able to design highly effective preconditioners to efficiently solve the large and ill-conditioned linear systems. Numerical experiments on both synthetic and real covariance selection problems show that our algorithm is highly efficient and outperforms other existing algorithms.
- Research Article · 10.1007/s11590-008-0100-y · Optimization Letters · Sep 12, 2008 · 23 citations
Detecting infeasibility in conic optimization and providing certificates for infeasibility pose a bigger challenge than in the linear case due to the lack of strong duality. In this paper we generalize the approximate Farkas lemma of Todd and Ye (Math Program 81:1–22, 1998) from the linear to the general conic setting, and use it to propose stopping criteria for interior point algorithms using self-dual embedding. The new criteria can identify if the solutions have large norm, thus they give an indication of infeasibility. The modified algorithms enjoy the same complexity bounds as the original ones, without assuming that the problem is feasible. Issues about the practical application of the criteria are also discussed.
- A semidefinite programming study of the Elfving theorem · Research Article · 10.1016/j.jspi.2011.03.033 · Journal of Statistical Planning and Inference · Apr 8, 2011 · 4 citations
- Conference Article · 10.1109/ipdps.2014.121 · May 1, 2014 · 13 citations
The semidefinite programming (SDP) problem is one of the central problems in mathematical optimization. The primal-dual interior-point method (PDIPM) is one of the most powerful algorithms for solving SDP problems, and many research groups have employed it for developing software packages. However, two well-known major bottlenecks, i.e., the generation of the Schur complement matrix (SCM) and its Cholesky factorization, exist in the algorithmic framework of the PDIPM. We have developed a new version of the semidefinite programming algorithm parallel version (SDPARA), which is a parallel implementation on multiple CPUs and GPUs for solving extremely large-scale SDP problems with over a million constraints. SDPARA can automatically extract the unique characteristics from an SDP problem and identify the bottleneck. When the generation of the SCM becomes a bottleneck, SDPARA can attain high scalability using a large quantity of CPU cores and some processor affinity and memory interleaving techniques. SDPARA can also perform parallel Cholesky factorization using thousands of GPUs and techniques for overlapping computation and communication if an SDP problem has over two million constraints and Cholesky factorization constitutes a bottleneck. We demonstrate that SDPARA is a high-performance general solver for SDPs in various application fields through numerical experiments conducted on the TSUBAME 2.5 supercomputer, and we solved the largest SDP problem (which has over 2.33 million constraints), thereby creating a new world record. Our implementation also achieved 1.713 PFlops in double precision for large-scale Cholesky factorization using 2,720 CPUs and 4,080 GPUs.
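The first bottleneck named in this abstract, forming the Schur complement matrix, amounts to m^2 trace computations per iteration. A dense NumPy sketch of that step for one common search direction (real solvers such as SDPARA exploit sparsity rather than forming dense products):

```python
import numpy as np

def schur_complement_matrix(As, X, S):
    """Dense sketch of the PDIPM Schur complement matrix
    B[i, j] = tr(A_i X A_j S^{-1}), symmetrized; the second
    bottleneck is then the Cholesky factorization of B."""
    Sinv = np.linalg.inv(S)
    m = len(As)
    B = np.empty((m, m))
    for j in range(m):
        T = X @ As[j] @ Sinv
        for i in range(m):
            B[i, j] = np.sum(As[i] * T)   # tr(A_i T) for symmetric A_i
    return (B + B.T) / 2                  # symmetrize against asymmetry
```

With X and S positive definite and linearly independent constraint matrices A_i, the symmetrized B is positive definite, so `np.linalg.cholesky(B)` succeeds.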
- Research Article · 10.7916/d8pc38bz · Jan 1, 2011 · 2 citations
Solving optimization problems with sparse or low-rank optimal solutions has been an important topic since the recent emergence of compressed sensing and its matrix extensions such as the matrix rank minimization and robust principal component analysis problems. Compressed sensing enables one to recover a signal or image with fewer observations than the “length” of the signal or image, and thus provides potential breakthroughs in applications where data acquisition is costly. However, the potential impact of compressed sensing cannot be realized without efficient optimization algorithms that can handle extremely large-scale and dense data from real applications. Although the convex relaxations of these problems can be reformulated as either linear programming, second-order cone programming or semidefinite programming problems, the standard methods for solving these relaxations are not applicable because the problems are usually of huge size and contain dense data. In this dissertation, we give efficient algorithms for solving these “sparse” optimization problems and analyze the convergence and iteration complexity properties of these algorithms. Chapter 2 presents algorithms for solving the linearly constrained matrix rank minimization problem. The tightest convex relaxation of this problem is the linearly constrained nuclear norm minimization. Although the latter can be cast and solved as a semidefinite programming problem, such an approach is computationally expensive when the matrices are large. In Chapter 2, we propose fixed-point and Bregman iterative algorithms for solving the nuclear norm minimization problem and prove convergence of the first of these algorithms. By using a homotopy approach together with an approximate singular value decomposition procedure, we get a very fast, robust and powerful algorithm, which we call FPCA (Fixed Point Continuation with Approximate SVD), that can solve very large matrix rank minimization problems. 
Our numerical results on randomly generated and real matrix completion problems demonstrate that this algorithm is much faster and provides much better recoverability than semidefinite programming solvers such as SDPT3. For example, our algorithm can recover 1000 × 1000 matrices of rank 50 with a relative error of 10^-5 in about 3 minutes by sampling only 20 percent of the elements. We know of no other method that achieves as good recoverability. Numerical experiments on online recommendation, DNA microarray data set and image inpainting problems demonstrate the effectiveness of our algorithms. In Chapter 3, we study the convergence/recoverability properties of the fixed-point continuation algorithm and its variants for matrix rank minimization. Heuristics for determining the rank of the matrix when its true rank is not known are also proposed. Some of these algorithms are closely related to greedy algorithms in compressed sensing. Numerical results for these algorithms for solving linearly constrained matrix rank minimization problems are reported. Chapters 4 and 5 consider alternating direction type methods for solving composite convex optimization problems. We present in Chapter 4 alternating linearization algorithms that are based on an alternating direction augmented Lagrangian approach for minimizing the sum of two convex functions. Our basic methods require at most O(1/ε) iterations to obtain an ε-optimal solution, while our accelerated (i.e., fast) versions require at most O(1/√ε) iterations, with little change in the computational effort required at each iteration. For the more general problem of minimizing the sum of K convex functions, we propose multiple-splitting algorithms, in both basic and accelerated variants, with O(1/ε) and O(1/√ε) iteration complexity bounds for obtaining an ε-optimal solution.

To the best of our knowledge, the complexity results presented in these two chapters are the first ones of this type that have been given for splitting and alternating direction type methods. Numerical results on various applications in sparse and low-rank optimization, including compressed sensing, matrix completion, image deblurring, and robust principal component analysis, are reported to demonstrate the efficiency of our methods.
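The nuclear-norm shrinkage at the heart of fixed-point algorithms like the FPCA method described above is singular value thresholding. A plain-NumPy sketch of that single step (FPCA itself uses an approximate SVD; this exact version is for illustration):

```python
import numpy as np

def singular_value_threshold(M: np.ndarray, tau: float) -> np.ndarray:
    """Proximal operator of tau * ||.||_* : soft-threshold the
    singular values. Shrinking the spectrum toward zero is what
    drives the iterates toward low rank."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.clip(s - tau, 0.0, None)
    return (U * s) @ Vt
```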