Proximal-stabilized semidefinite programming
A regularized version of the primal-dual Interior Point Method (IPM) for the solution of Semidefinite Programming problems (SDPs) is presented in this paper. Leveraging the proximal point method, a novel Proximal Stabilized Interior Point Method for SDP (PS-SDP-IPM) is introduced. The method is strongly supported by theoretical results concerning its convergence: a worst-case complexity result is established for the inner regularized infeasible inexact IPM solver. The new method demonstrates increased robustness when dealing with problems characterized by ill-conditioning or linear dependence of the constraints, without requiring any kind of pre-processing. Extensive numerical experience is reported to illustrate the advantages of the proposed method when compared to a state-of-the-art solver.
Highlights
Semidefinite Programming (SDP), see [40] for a detailed introduction to the area, is a powerful mathematical framework that extends linear programming to optimization problems involving positive semidefinite matrices.
While SDP has proven to be a valuable tool for approximating and modelling challenging problems, it is important to be aware of the numerical instabilities that can arise during the solution of SDP problems.
Understanding and addressing numerical challenges in SDP solvers is crucial for obtaining robust algorithms, especially when dealing with Interior Point Methods (IPMs) [26], which are characterized by the severe inherent ill-conditioning of the related linear algebra problems [11, 18].
Summary
Semidefinite Programming (SDP), see [40] for a detailed introduction to the area, is a powerful mathematical framework that extends linear programming to optimization problems involving positive semidefinite matrices. It finds applications in various fields such as control theory, machine learning, combinatorial optimization [2, 39], and quantum information theory [15, 25], to mention just a few. Understanding and addressing numerical challenges in SDP solvers is crucial for obtaining robust algorithms, especially when dealing with Interior Point Methods (IPMs) [26], which are characterized by the severe inherent ill-conditioning of the related linear algebra problems [11, 18]. This work contributes to the understanding of the tight interplay between numerical linear algebra techniques and optimization algorithms when developing robust IPM-type SDP solvers able to deal with instances which would otherwise challenge standard IPM-based solvers.
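To make the proximal stabilization idea concrete, here is a minimal sketch (illustrative only; the quadratic objective, data, and parameter values are ours, not the paper's) of the classical proximal point iteration on a degenerate problem, showing how the regularization term restores invertibility of the Newton-type system:

```python
import numpy as np

# Proximal point sketch on a convex quadratic f(x) = 0.5 x^T H x - b^T x,
# where H is singular, so the plain system H x = b is ill-posed.
# The proximal subproblem  min_x f(x) + (rho/2)||x - x_k||^2  has the unique
# solution x_{k+1} = (H + rho I)^{-1}(b + rho x_k): regularization restores
# invertibility, mirroring the robustness the PS-SDP-IPM aims for.

def proximal_point(H, b, rho=1e-2, iters=200):
    n = H.shape[0]
    x = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(H + rho * np.eye(n), b + rho * x)
    return x

# Rank-deficient Hessian: a direct solve would fail, the proximal iteration
# still converges to a minimizer.
H = np.diag([2.0, 1.0, 0.0])   # singular
b = np.array([2.0, 1.0, 0.0])  # consistent with the range of H
x = proximal_point(H, b)       # approaches the minimizer [1, 1, 0]
```

At a fixed point, (H + rho*I)x = b + rho*x reduces to Hx = b, so the stabilized iteration solves the original problem while every subproblem stays well-conditioned.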
- 10.1137/s1052623495296115
- May 1, 1998
- SIAM Journal on Optimization
- 10.1017/cbo9780511840371
- Apr 11, 1991
- 10.1016/j.ejor.2023.11.027
- Nov 19, 2023
- European Journal of Operational Research
- 10.1137/15m1052007
- Jan 1, 2016
- SIAM Journal on Matrix Analysis and Applications
- 10.1088/1751-8121/aab285
- Mar 16, 2018
- Journal of Physics A: Mathematical and Theoretical
- Jan 1, 1986
- Oftalmologicheskii zhurnal
- 10.48550/arxiv.2001.00216
- Jan 1, 2020
- 10.1287/moor.22.1.1
- Feb 1, 1997
- Mathematics of Operations Research
- 10.1111/j.1365-2648.1991.tb01770.x
- Jul 1, 1991
- Journal of Advanced Nursing
- 10.1007/s10851-019-00916-w
- Oct 19, 2019
- Journal of Mathematical Imaging and Vision
- Research Article
- 10.1007/s10957-013-0354-0
- Jun 27, 2013
- Journal of Optimization Theory and Applications
Derivative-Free Optimization (DFO) examines the challenge of minimizing (or maximizing) a function without explicit use of derivative information. Many standard techniques in DFO are based on using model functions to approximate the objective function, and then applying classic optimization methods to the model function. For example, the details behind adapting steepest descent, conjugate gradient, and quasi-Newton methods to DFO have been studied in this manner. In this paper we demonstrate that the proximal point method can also be adapted to DFO. To that end, we provide a derivative-free proximal point (DFPP) method and prove convergence of the method in a general sense. In particular, we give conditions under which the gradient values of the iterates converge to 0, and conditions under which an iterate corresponds to a stationary point of the objective function.
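As a toy illustration of the idea (this is not the paper's DFPP algorithm; the pattern-search prox solver and all parameters are our own simplification), each outer iteration below approximately minimizes the proximal model using only function values:

```python
import numpy as np

# Derivative-free proximal point sketch: one prox step minimizes the model
#   m(y) = f(y) + (1/(2*lam)) * ||y - xk||^2
# by a simple coordinate pattern search, so no gradients are ever evaluated.

def df_prox_step(f, xk, lam=0.5, step=1.0, tol=1e-6):
    x = xk.copy()
    m = lambda y: f(y) + np.dot(y - xk, y - xk) / (2 * lam)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for s in (step, -step):       # probe both coordinate directions
                y = x.copy(); y[i] += s
                if m(y) < m(x):
                    x, improved = y, True
        if not improved:
            step *= 0.5                   # refine the stencil
    return x

def dfpp(f, x0, iters=20):
    x = x0.copy()
    for _ in range(iters):
        x = df_prox_step(f, x)
    return x

f = lambda x: (x[0] - 3.0) ** 2 + np.abs(x[1])   # nonsmooth test objective
x = dfpp(f, np.array([0.0, 2.0]))                # tends toward [3, 0]
```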
- Conference Article
- 10.1109/ipdps.2014.121
- May 1, 2014
The semidefinite programming (SDP) problem is one of the central problems in mathematical optimization. The primal-dual interior-point method (PDIPM) is one of the most powerful algorithms for solving SDP problems, and many research groups have employed it for developing software packages. However, two well-known major bottlenecks, i.e., the generation of the Schur complement matrix (SCM) and its Cholesky factorization, exist in the algorithmic framework of the PDIPM. We have developed a new version of the semidefinite programming algorithm parallel version (SDPARA), which is a parallel implementation on multiple CPUs and GPUs for solving extremely large-scale SDP problems with over a million constraints. SDPARA can automatically extract the unique characteristics from an SDP problem and identify the bottleneck. When the generation of the SCM becomes a bottleneck, SDPARA can attain high scalability using a large quantity of CPU cores and some processor affinity and memory interleaving techniques. SDPARA can also perform parallel Cholesky factorization using thousands of GPUs and techniques for overlapping computation and communication if an SDP problem has over two million constraints and Cholesky factorization constitutes a bottleneck. We demonstrate that SDPARA is a high-performance general solver for SDPs in various application fields through numerical experiments conducted on the TSUBAME 2.5 supercomputer, and we solved the largest SDP problem (which has over 2.33 million constraints), thereby creating a new world record. Our implementation also achieved 1.713 PFlops in double precision for large-scale Cholesky factorization using 2,720 CPUs and 4,080 GPUs.
- Research Article
- 10.1137/18m1230323
- Jan 1, 2019
- SIAM Journal on Optimization
We develop model-based methods for solving stochastic convex optimization problems, introducing the approximate-proximal point, or aProx, family, which includes stochastic subgradient, proximal point, and bundle methods. When the modeling approaches we propose are appropriately accurate, the methods enjoy stronger convergence and robustness guarantees than classical approaches, even though the model-based methods typically add little to no computational overhead over stochastic subgradient methods. For example, we show that improved models converge with probability 1 and enjoy optimal asymptotic normality results under weak assumptions; these methods are also adaptive to a natural class of what we term easy optimization problems, achieving linear convergence under appropriate strong growth conditions on the objective. Our substantial experimental investigation shows the advantages of more accurate modeling over standard subgradient methods across many smooth and non-smooth optimization problems.
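For intuition, here is a sketch of one member of this family, the stochastic proximal point method for least squares, where each proximal subproblem has a closed form so the step costs no more than SGD (problem data and step size are illustrative, not from the paper):

```python
import numpy as np

# Stochastic proximal point for least squares: for a single row (a_i, b_i),
#   argmin_y 0.5*(a_i^T y - b_i)^2 + (1/(2*eta))*||y - x||^2
# has the closed-form solution used below, obtained by solving the
# first-order condition for the residual.

def stochastic_prox_point(A, b, eta=1.0, epochs=50, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(A.shape[0]):
            a, r = A[i], A[i] @ x - b[i]
            x = x - eta * r / (1.0 + eta * a @ a) * a  # exact prox step
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5))
x_true = np.arange(1.0, 6.0)
b = A @ x_true                    # noiseless (interpolation) regime
x = stochastic_prox_point(A, b)   # converges to x_true in this easy setting
```

In this noiseless setting the iteration behaves like a relaxed randomized Kaczmarz method, which is one way to see the linear convergence on "easy" problems mentioned in the abstract.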
- Research Article
- 10.1016/j.ymssp.2020.106792
- Apr 8, 2020
- Mechanical Systems and Signal Processing
Modal dynamic residual-based model updating through regularized semidefinite programming with facial reduction
- Research Article
- 10.1081/nfa-120034119
- Dec 31, 2004
- Numerical Functional Analysis and Optimization
In this paper we introduce a boundary coercive condition on the regularizing function of the proximal point method in Banach spaces, which has a penalization effect and is useful for solving certain variational inequality problems. Like its finite-dimensional counterpart (the proximal point method with Bregman distances), it avoids explicit consideration of the constraints, because feasibility is taken care of by the regularizing function, whose derivative diverges on the boundary of the feasible set. The algorithm is therefore an interior point one. We show that this effect is guaranteed even for an inexact version of the method, namely the hybrid extragradient proximal point method proposed by Solodov and Svaiter for finite dimensional spaces. We give a full convergence analysis, and examples of regularizing functions satisfying the required conditions, for the cases of the feasible set being a closed ball or a polyhedron.
- Book Chapter
- 10.1007/978-4-431-55060-0_30
- Jan 1, 2014
The semidefinite programming (SDP) problem is one of the central problems in mathematical optimization. The primal-dual interior-point method (PDIPM) is one of the most powerful algorithms for solving SDP problems, and many research groups have employed it for developing software packages. However, two well-known major bottlenecks, i.e., the generation of the Schur complement matrix (SCM) and its Cholesky factorization, exist in the algorithmic framework of the PDIPM. We have developed a new version of the semidefinite programming algorithm parallel version (SDPARA), which is a parallel implementation on multiple CPUs and GPUs for solving extremely large-scale SDP problems with over a million constraints. SDPARA can automatically extract the unique characteristics from an SDP problem and identify the bottleneck. When the generation of the SCM becomes a bottleneck, SDPARA can attain high scalability using a large quantity of CPU cores and some processor affinity and memory interleaving techniques. SDPARA can also perform parallel Cholesky factorization using thousands of GPUs and techniques for overlapping computation and communication if an SDP problem has over 2 million constraints and Cholesky factorization constitutes a bottleneck. We demonstrate that SDPARA is a high-performance general solver for SDPs in various application fields through numerical experiments conducted on the TSUBAME 2.5 supercomputer, and we solved the largest SDP problem (which has over 2.33 million constraints), thereby creating a new world record. Our implementation also achieved 1.713 PFlops in double precision for large-scale Cholesky factorization using 2,720 CPUs and 4,080 GPUs.
- Research Article
- 10.1016/j.ijepes.2004.12.001
- Mar 7, 2005
- International Journal of Electrical Power and Energy Systems
A trust region interior point algorithm for optimal power flow problems
- Research Article
- 10.1109/tnnls.2013.2275170
- Feb 1, 2014
- IEEE transactions on neural networks and learning systems
Distance metric learning is of fundamental interest in machine learning because the employed distance metric can significantly affect the performance of many learning methods. Quadratic Mahalanobis metric learning is a popular approach to the problem, but typically requires solving a semidefinite programming (SDP) problem, which is computationally expensive. The worst-case complexity of solving an SDP problem involving a matrix variable of size D×D with O(D) linear constraints is about O(D^6.5) using interior-point methods, where D is the dimension of the input data. Thus, interior-point methods can only practically solve problems with fewer than a few thousand variables. Because the number of variables is D(D+1)/2, this implies a limit on the size of problems that can practically be solved of around a few hundred dimensions. The complexity of the popular quadratic Mahalanobis metric learning approach thus limits the size of problems to which metric learning can be applied. Here, we propose a significantly more efficient and scalable approach to the metric learning problem based on the Lagrange dual formulation of the problem. The proposed formulation is much simpler to implement, and therefore allows much larger Mahalanobis metric learning problems to be solved. The time complexity of the proposed method is roughly O(D^3), which is significantly lower than that of the SDP approach. Experiments on a variety of data sets demonstrate that the proposed method achieves an accuracy comparable with the state of the art, but is applicable to significantly larger problems. We also show that the proposed method can be applied to solve more general Frobenius norm regularized SDP problems approximately.
- Research Article
- 10.1080/02331934.2017.1349124
- Jul 11, 2017
- Optimization
The proximal point method is one of the most influential procedures for solving nonlinear variational problems. It has recently been introduced in Hadamard spaces for solving convex optimization, and later for variational inequalities. In this paper, we study the general proximal point method for finding a zero point of a maximal monotone set-valued vector field defined on a Hadamard space and valued in its dual. We also give the relation between maximality and Minty's surjectivity condition, which is essential for the proximal point method to be well-defined. By exploring the properties of monotonicity and the surjectivity condition, we show under mild assumptions that the proximal point method converges weakly to a zero point. Additionally, by taking into account metric subregularity, we obtain local strong convergence at linear and superlinear rates.
- Book Chapter
- 10.1007/978-3-319-42432-3_33
- Jan 1, 2016
In this talk, we present our ongoing research project. The objective of this project is to develop advanced computing and optimization infrastructures for extremely large-scale graphs on post peta-scale supercomputers. We explain our challenge to Graph 500 and Green Graph 500 benchmarks that are designed to measure the performance of a computer system for applications that require irregular memory and network access patterns. The 1st Graph500 list was released in November 2010. The Graph500 benchmark measures the performance of any supercomputer performing a BFS (Breadth-First Search) in terms of traversed edges per second (TEPS). In 2014 and 2015, our project team was a winner of the 8th, 10th, and 11th Graph500 and the 3rd to 6th Green Graph500 benchmarks, respectively. We also present our parallel implementation for large-scale SDP (SemiDefinite Programming) problem. The semidefinite programming (SDP) problem is a predominant problem in mathematical optimization. The primal-dual interior-point method (PDIPM) is one of the most powerful algorithms for solving SDP problems, and many research groups have employed it for developing software packages. We solved the largest SDP problem (which has over 2.33 million constraints), thereby creating a new world record. Our implementation also achieved 1.774 PFlops in double precision for large-scale Cholesky factorization using 2,720 CPUs and 4,080 GPUs on the TSUBAME 2.5 supercomputer.
- Research Article
- 10.1051/ro:1999117
- Oct 1, 1999
- RAIRO - Operations Research
We consider a generalized proximal point method (GPPA) for solving the nonlinear complementarity problem with monotone operators in R^n. It differs from the classical proximal point method discussed by Rockafellar for the problem of finding zeroes of monotone operators in the use of generalized distances, called φ-divergences, instead of the Euclidean one. These distances play not only a regularization role but also a penalization one, forcing the sequence generated by the method to remain in the interior of the feasible set, so that the method behaves like an interior point one. Under appropriate assumptions on the φ-divergence and the monotone operator we prove that the sequence converges if and only if the problem has solutions, in which case the limit is a solution. If the problem does not have solutions, then the sequence is unbounded. We extend previous results for the proximal point method concerning convex optimization problems.
- Research Article
- 10.1137/s1052623495286302
- Feb 1, 1998
- SIAM Journal on Optimization
We consider a generalized proximal point method for solving variational inequality problems with monotone operators in a Hilbert space. It differs from the classical proximal point method (as discussed by Rockafellar for the problem of finding zeroes of monotone operators) in the use of generalized distances, called Bregman distances, instead of the Euclidean one. These distances play not only a regularization role but also a penalization one, forcing the sequence generated by the method to remain in the interior of the feasible set so that the method becomes an interior point one. Under appropriate assumptions on the Bregman distance and the monotone operator we prove that the sequence converges (weakly) if and only if the problem has solutions, in which case the weak limit is a solution. If the problem does not have solutions, then the sequence is unbounded. We extend similar previous results for the proximal point method with Bregman distances which dealt only with the finite dimensional case and which applied only to convex optimization problems or to finding zeroes of monotone operators, which are particular cases of variational inequality problems.
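A minimal sketch of the mechanism (ours, not the authors'): with the entropy kernel, the Bregman distance diverges at the boundary of the positive orthant, so each proximal step, solved here coordinatewise by Newton's method, keeps the iterates strictly feasible:

```python
import numpy as np

# Bregman proximal point sketch with the entropy kernel h(y) = sum(y log y),
# whose Bregman distance D(y, x) = sum(y*log(y/x) - y + x) blows up as any
# y_i -> 0: the regularizer itself enforces feasibility, so the method is
# interior-point by construction. Objective f(y) = 0.5*||y - t||^2 and all
# parameters are illustrative.

def bregman_prox_step(t, x, eta=1.0, newton_iters=50):
    # Coordinatewise solve of (y - t_i) + (1/eta)*log(y/x_i) = 0 in y > 0.
    y = np.maximum(x, 1e-8)
    for _ in range(newton_iters):
        g = (y - t) + np.log(y / x) / eta
        y = np.maximum(y - g / (1.0 + 1.0 / (eta * y)), 1e-12)  # stay positive
    return y

t = np.array([2.0, -1.0])   # unconstrained minimizer leaves the orthant
x = np.full(2, 1.0)
for _ in range(300):
    x = bregman_prox_step(t, x)
# x[0] -> 2 (interior stationary point); x[1] -> 0 from above but stays
# strictly positive, approaching the constrained minimizer over y >= 0.
```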
- Conference Article
- 10.1109/icpes.2011.6156662
- Dec 1, 2011
A power flow or load flow solution is essential for the continuous evaluation of the performance of power systems, so that suitable control measures can be taken in case of necessity. Load flow studies are made to plan the best operation and control of the existing system, as well as to plan future expansion to keep pace with load growth. Generally, in power system operation, the main aim is to operate the system optimally. Optimality can be achieved by minimizing cost and losses while maintaining the voltage profile. To achieve these conditions, that is, to operate the system optimally, we apply two optimization techniques, namely optimal power flow (OPF) by the Newton method and the Interior Point (IP) method. In this paper a 5-bus test system is taken and analyzed using the two methods mentioned above. It is shown how the system power losses decrease and voltage profiles improve when the Interior Point (IP) method is used in this model. The results are generated for the 5-bus system.
- Research Article
- 10.61467/2007.1558.2024.v15i2.204
- Oct 1, 2024
- International Journal of Combinatorial Optimization Problems and Informatics
This paper is an attempt to exploit the opportunities of Semidefinite Programming (SDP), an area of convex and conic optimization. Indeed, numerous NP-hard problems can be solved using this approach. Hence, we intend to investigate the strength of SDP to model and provide tight relaxations of combinatorial and quadratic problems, in order to present a new polynomial-time algorithm for solving this robust model. This algorithm was first used to solve nonlinear programs, which is why we seek to extend it to SDP programs. The algorithm combines two penalization methods. The first is a primal-dual interior point (PDIM) method, while the second is a primal-dual exterior point (PDEM) method. Unlike the first method, which converges globally, the second one, also called the primal-dual nonlinear rescaling method, has local superlinear/quadratic convergence. Therefore, it seems appropriate to use a mixed algorithm based on the interior-exterior point method (IEPM). The resolution starts from the interior method and, at a certain level of execution, proceeds to the exterior method; a convergence evaluation function determines the switching point. Through evaluation, it has been confirmed that our approach can solve some instances of the max-cut problem. This problem is a central graph theory model that occurs in many real problems, and it is one of many NP-hard problems that have attracted researchers over the years. We have used the semidefinite programming solver SDPA (SemiDefinite Programming Algorithm), modified to include the exterior point method subroutine. From the computational performance, we conclude that as the problem size increases, the interior-exterior point algorithm gets relatively faster. The numerical results obtained are promising.
- Research Article
- 10.1007/s10957-023-02194-4
- Apr 5, 2023
- Journal of Optimization Theory and Applications
In this work, in the context of Linear and convex Quadratic Programming, we consider Primal-Dual Regularized Interior Point Methods (PDR-IPMs) in the framework of the Proximal Point Method. The resulting Proximal Stabilized IPM (PS-IPM) is strongly supported by theoretical results concerning convergence and the rate of convergence, and can handle degenerate problems. Moreover, in the second part of this work, we analyse the interactions between the regularization parameters and the computational footprint of the linear algebra routines used to solve the Newton linear systems. In particular, when these systems are solved using an iterative Krylov method, we are able to show—using a new rearrangement of the Schur complement which exploits regularization—that general purpose preconditioners remain attractive for a series of subsequent IPM iterations. Indeed, if on the one hand a series of theoretical results underpins the fact that the approach presented here allows a better re-use of such computed preconditioners, on the other, we show experimentally that such (re)computations are needed only in a fraction of the total IPM iterations. The resulting regularized second order methods, for which low-frequency updates of the preconditioners are allowed, pave the way for an alternative class of second order methods characterized by reduced computational effort.
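A small numerical sketch of the regularization effect described above (illustrative data; not the authors' code): primal-dual regularization keeps the IPM normal-equations matrix positive definite even when the constraint rows are linearly dependent, so direct or preconditioned Krylov solvers remain applicable.

```python
import numpy as np

# IPM-style normal equations matrix with primal-dual regularization delta:
#   M(delta) = A * diag(theta) * A^T + delta * I.
# With dependent rows in A, M(0) is singular; M(delta) is positive definite.

def normal_matrix(A, theta, delta=0.0):
    return A @ np.diag(theta) @ A.T + delta * np.eye(A.shape[0])

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # dependent row: rank(A) = 2
              [0.0, 1.0, 1.0]])
theta = np.array([1e-4, 1.0, 1e4])   # IPM-style spread of scaling factors

M = normal_matrix(A, theta)                 # numerically singular
M_reg = normal_matrix(A, theta, delta=1e-2)

# The regularized matrix admits a Cholesky factorization; the unregularized
# one does not, since it is rank-deficient.
L = np.linalg.cholesky(M_reg)
```

The same regularized matrix can be handed to a preconditioned conjugate-gradient solver, and since delta bounds its smallest eigenvalue away from zero, a preconditioner built at one IPM iteration stays useful for several subsequent ones.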