Proximal-stabilized semidefinite programming

Abstract

A regularized version of the primal-dual Interior Point Method (IPM) for the solution of Semidefinite Programming problems (SDPs) is presented in this paper. Leveraging the proximal point method, a novel Proximal Stabilized Interior Point Method for SDP (PS-SDP-IPM) is introduced. The method is strongly supported by theoretical results concerning its convergence: a worst-case complexity result is established for the inner regularized infeasible inexact IPM solver. The new method demonstrates increased robustness when dealing with problems characterized by ill-conditioning or linear dependence of the constraints, without requiring any kind of pre-processing. Extensive numerical experience is reported to illustrate the advantages of the proposed method when compared to a state-of-the-art solver.
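
To make the regularization concrete, the following is the generic proximal point template applied to the standard primal SDP; this is a sketch of the idea only, and the paper's actual primal-dual regularization may treat the dual variables differently. For the primal problem

    \min_{X \succeq 0} \; \langle C, X \rangle \quad \text{s.t.} \quad \mathcal{A}(X) = b,

the k-th outer iteration replaces the objective by a strongly convex proximal subproblem

    X_{k+1} \approx \arg\min_{X \succeq 0} \; \langle C, X \rangle + \frac{\rho}{2} \, \| X - X_k \|_F^2 \quad \text{s.t.} \quad \mathcal{A}(X) = b,

which the inner (infeasible, inexact) IPM solves only approximately; the added quadratic term keeps the inner linear algebra well conditioned even when the constraints are linearly dependent.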

Highlights

  • Semidefinite Programming (SDP), see [40] for a detailed introduction to the area, is a powerful mathematical framework that extends linear programming to optimization problems involving positive semidefinite matrices

  • While SDP has proven to be a valuable tool in approximating and modelling challenging problems, it is important to be aware of numerical instabilities that can arise during the solution of SDP problems

  • Understanding and addressing numerical challenges in SDP solvers is crucial for obtaining robust algorithms, especially when dealing with Interior Point Methods (IPM) [26], which are characterized by a severe inherent ill-conditioning of the related linear algebra problems [11, 18]


Summary

Introduction

Semidefinite Programming (SDP), see [40] for a detailed introduction to the area, is a powerful mathematical framework that extends linear programming to optimization problems involving positive semidefinite matrices. It finds applications in various fields such as control theory, machine learning, combinatorial optimization [2, 39], and quantum information theory [15, 25], to mention just a few. Understanding and addressing numerical challenges in SDP solvers is crucial for obtaining robust algorithms, especially when dealing with Interior Point Methods (IPM) [26], which are characterized by a severe inherent ill-conditioning of the related linear algebra problems [11, 18]. This work contributes to the understanding of the tight interplay between numerical linear algebra techniques and optimization algorithms when developing robust IPM-type SDP solvers able to deal with instances which would otherwise challenge standard IPM-based solvers.
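
For reference, the standard primal-dual SDP pair targeted by such methods is

    (P) \quad \min_X \; \langle C, X \rangle \quad \text{s.t.} \quad \langle A_i, X \rangle = b_i, \; i = 1, \dots, m, \quad X \succeq 0,
    (D) \quad \max_{y, S} \; b^\top y \quad \text{s.t.} \quad \sum_{i=1}^m y_i A_i + S = C, \quad S \succeq 0,

and a primal-dual IPM follows the central path defined by X S = \mu I as \mu \to 0, which is where the ill-conditioning of the Newton systems originates.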

Motivations and problem statement
Contribution and related literature
Notation
Proximal point method
Follows from observing that
Initial point
Stopping criteria
Predictor corrector
Dataset
Numerical results
Conclusions
References (showing 10 of 59 papers)
  • On Extending Some Primal-Dual Interior-Point Algorithms From Linear Programming to Semidefinite Programming — Yin Zhang, SIAM Journal on Optimization, May 1, 1998. doi:10.1137/s1052623495296115
  • Topics in Matrix Analysis — Roger A. Horn et al., Apr 11, 1991. doi:10.1017/cbo9780511840371
  • A regularized interior point method for sparse optimal transport on graphs — S. Cipolla et al., European Journal of Operational Research, Nov 19, 2023. doi:10.1016/j.ejor.2023.11.027
  • Bounds for the Distance to the Nearest Correlation Matrix — Nicholas J. Higham et al., SIAM Journal on Matrix Analysis and Applications, Jan 1, 2016. doi:10.1137/15m1052007
  • Efficient optimization of the quantum relative entropy — Hamza Fawzi et al., Journal of Physics A: Mathematical and Theoretical, Mar 16, 2018. doi:10.1088/1751-8121/aab285
  • Apitherapy in ophthalmology — V. P. Mozherenkov et al., Oftalmologicheskii zhurnal, Jan 1, 1986.
  • Introduction to Nonsmooth Analysis and Optimization — Christian Clason et al., Jan 1, 2020. doi:10.48550/arxiv.2001.00216
  • Self-Scaled Barriers and Interior-Point Methods for Convex Programming — Yu. E. Nesterov et al., Mathematics of Operations Research, Feb 1, 1997. doi:10.1287/moor.22.1.1
  • The language of experiential learning — Philip Burnard, Journal of Advanced Nursing, Jul 1, 1991. doi:10.1111/j.1365-2648.1991.tb01770.x
  • A Proximal Interior Point Algorithm with Applications to Image Processing — Emilie Chouzenoux et al., Journal of Mathematical Imaging and Vision, Oct 19, 2019. doi:10.1007/s10851-019-00916-w

Similar Papers
  • Derivative-Free Optimization Via Proximal Point Methods — W. L. Hare et al., Journal of Optimization Theory and Applications, Jun 27, 2013. doi:10.1007/s10957-013-0354-0

Derivative-Free Optimization (DFO) examines the challenge of minimizing (or maximizing) a function without explicit use of derivative information. Many standard techniques in DFO are based on using model functions to approximate the objective function, and then applying classic optimization methods to the model function. For example, the details behind adapting steepest descent, conjugate gradient, and quasi-Newton methods to DFO have been studied in this manner. In this paper we demonstrate that the proximal point method can also be adapted to DFO. To that end, we provide a derivative-free proximal point (DFPP) method and prove convergence of the method in a general sense. In particular, we give conditions under which the gradient values of the iterates converge to 0, and conditions under which an iterate corresponds to a stationary point of the objective function.
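
As an aside, the proximal point template is easy to pair with any derivative-free inner solver. The sketch below illustrates that idea only and is not the authors' DFPP method: the function name proximal_point_dfo is hypothetical, and Nelder-Mead (via SciPy) is an arbitrary choice of derivative-free inner solver.

    import numpy as np
    from scipy.optimize import minimize

    def proximal_point_dfo(f, x0, step=1.0, outer_iters=50, tol=1e-8):
        """Proximal point outer loop with a derivative-free inner solver (sketch)."""
        x = np.asarray(x0, dtype=float)
        for _ in range(outer_iters):
            # Inner subproblem: min_y f(y) + ||y - x||^2 / (2*step),
            # solved without derivatives via Nelder-Mead.
            def prox_obj(y, xc=x):
                return f(y) + np.dot(y - xc, y - xc) / (2.0 * step)
            y = minimize(prox_obj, x, method="Nelder-Mead").x
            moved = np.linalg.norm(y - x)
            x = y
            if moved < tol:  # proximal fixed point => (near-)stationary
                break
        return x

    # Usage: minimize the Rosenbrock function without gradient evaluations.
    rosen = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
    print(proximal_point_dfo(rosen, np.array([-1.2, 1.0])))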

  • Petascale General Solver for Semidefinite Programming Problems with Over Two Million Constraints — Katsuki Fujisawa et al., May 1, 2014. doi:10.1109/ipdps.2014.121

The semidefinite programming (SDP) problem is one of the central problems in mathematical optimization. The primal-dual interior-point method (PDIPM) is one of the most powerful algorithms for solving SDP problems, and many research groups have employed it for developing software packages. However, two well-known major bottlenecks, i.e., the generation of the Schur complement matrix (SCM) and its Cholesky factorization, exist in the algorithmic framework of the PDIPM. We have developed a new version of the semidefinite programming algorithm parallel version (SDPARA), which is a parallel implementation on multiple CPUs and GPUs for solving extremely large-scale SDP problems with over a million constraints. SDPARA can automatically extract the unique characteristics from an SDP problem and identify the bottleneck. When the generation of the SCM becomes a bottleneck, SDPARA can attain high scalability using a large quantity of CPU cores and some processor affinity and memory interleaving techniques. SDPARA can also perform parallel Cholesky factorization using thousands of GPUs and techniques for overlapping computation and communication if an SDP problem has over two million constraints and Cholesky factorization constitutes a bottleneck. We demonstrate that SDPARA is a high-performance general solver for SDPs in various application fields through numerical experiments conducted on the TSUBAME 2.5 supercomputer, and we solved the largest SDP problem (which has over 2.33 million constraints), thereby creating a new world record. Our implementation also achieved 1.713 PFlops in double precision for large-scale Cholesky factorization using 2,720 CPUs and 4,080 GPUs.
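
For context, the Schur complement matrix in a primal-dual SDP IPM is dense and of size m × m, with m the number of constraints; for the common HKM search direction its entries take a form like

    B_{ij} = \operatorname{tr}\left( A_i X A_j S^{-1} \right), \qquad i, j = 1, \dots, m,

so forming B requires on the order of m^2 matrix products and its Cholesky factorization costs O(m^3) flops, which is why these two steps dominate at the scales SDPARA targets (the exact formula depends on the chosen search direction).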

  • Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity — Hilal Asi et al., SIAM Journal on Optimization, Jan 1, 2019. doi:10.1137/18m1230323

We develop model-based methods for solving stochastic convex optimization problems, introducing the approximate-proximal point, or aProx, family, which includes stochastic subgradient, proximal point, and bundle methods. When the modeling approaches we propose are appropriately accurate, the methods enjoy stronger convergence and robustness guarantees than classical approaches, even though the model-based methods typically add little to no computational overhead over stochastic subgradient methods. For example, we show that improved models converge with probability 1 and enjoy optimal asymptotic normality results under weak assumptions; these methods are also adaptive to a natural class of what we term easy optimization problems, achieving linear convergence under appropriate strong growth conditions on the objective. Our substantial experimental investigation shows the advantages of more accurate modeling over standard subgradient methods across many smooth and non-smooth optimization problems.
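
For intuition on why model-based steps can cost no more than subgradient steps, consider stochastic linear least squares, where the exact proximal step of a single sampled loss has a closed form. The sketch below is an illustration under that assumption, not the authors' aProx implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 1000, 10
    A = rng.normal(size=(n, d))
    x_true = rng.normal(size=d)
    b = A @ x_true  # consistent linear system

    def stochastic_prox_point(alpha=1.0, iters=20000):
        """Stochastic proximal point for f_i(x) = 0.5 * (a_i @ x - b_i)^2.
        The prox of one squared-linear loss is available in closed form:
        x+ = x - alpha * (a @ x - b) / (1 + alpha * ||a||^2) * a."""
        x = np.zeros(d)
        for _ in range(iters):
            i = rng.integers(n)
            a, bi = A[i], b[i]
            x -= alpha * (a @ x - bi) / (1.0 + alpha * (a @ a)) * a
        return x

    print(np.linalg.norm(stochastic_prox_point() - x_true))  # -> near 0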

  • Modal dynamic residual-based model updating through regularized semidefinite programming with facial reduction — Dan Li et al., Mechanical Systems and Signal Processing, Apr 8, 2020. doi:10.1016/j.ymssp.2020.106792

  • Proximal Methods with Penalization Effects in Banach Spaces — Rolando Gárciga Otero et al., Numerical Functional Analysis and Optimization, Dec 31, 2004. doi:10.1081/nfa-120034119

In this paper we introduce a boundary coercive condition on the regularizing function of the proximal point method in Banach spaces, which has a penalization effect and is useful for solving certain variational inequality problems. Like its finite-dimensional counterpart (the proximal point method with Bregman distances), it avoids explicit consideration of the constraints, because feasibility is taken care of by the regularizing function, whose derivative diverges on the boundary of the feasible set. The algorithm is therefore an interior point one. We show that this effect is guaranteed even for an inexact version of the method, namely the hybrid extragradient proximal point method proposed by Solodov and Svaiter for finite-dimensional spaces. We give a full convergence analysis, and examples of regularizing functions satisfying the required conditions, for the cases of the feasible set being a closed ball or a polyhedron.

  • High Performance Computing for Mathematical Optimization Problem — Katsuki Fujisawa, Jan 1, 2014. doi:10.1007/978-4-431-55060-0_30

The semidefinite programming (SDP) problem is one of the central problems in mathematical optimization. The primal-dual interior-point method (PDIPM) is one of the most powerful algorithms for solving SDP problems, and many research groups have employed it for developing software packages. However, two well-known major bottlenecks, i.e., the generation of the Schur complement matrix (SCM) and its Cholesky factorization, exist in the algorithmic framework of the PDIPM. We have developed a new version of the semidefinite programming algorithm parallel version (SDPARA), which is a parallel implementation on multiple CPUs and GPUs for solving extremely large-scale SDP problems with over a million constraints. SDPARA can automatically extract the unique characteristics from an SDP problem and identify the bottleneck. When the generation of the SCM becomes a bottleneck, SDPARA can attain high scalability using a large quantity of CPU cores and some processor affinity and memory interleaving techniques. SDPARA can also perform parallel Cholesky factorization using thousands of GPUs and techniques for overlapping computation and communication if an SDP problem has over 2 million constraints and Cholesky factorization constitutes a bottleneck. We demonstrate that SDPARA is a high-performance general solver for SDPs in various application fields through numerical experiments conducted on the TSUBAME 2.5 supercomputer, and we solved the largest SDP problem (which has over 2.33 million constraints), thereby creating a new world record. Our implementation also achieved 1.713 PFlops in double precision for large-scale Cholesky factorization using 2,720 CPUs and 4,080 GPUs.

  • A trust region interior point algorithm for optimal power flow problems — Wang Min et al., International Journal of Electrical Power and Energy Systems, Mar 7, 2005. doi:10.1016/j.ijepes.2004.12.001

  • Efficient dual approach to distance metric learning — Chunhua Shen et al., IEEE Transactions on Neural Networks and Learning Systems, Feb 1, 2014. doi:10.1109/tnnls.2013.2275170

Distance metric learning is of fundamental interest in machine learning because the employed distance metric can significantly affect the performance of many learning methods. Quadratic Mahalanobis metric learning is a popular approach to the problem, but typically requires solving a semidefinite programming (SDP) problem, which is computationally expensive. The worst-case complexity of solving an SDP problem involving a matrix variable of size D×D with O(D) linear constraints is about O(D^6.5) using interior-point methods, where D is the dimension of the input data. Thus, interior-point methods can only practically solve problems with fewer than a few thousand variables. Because the number of variables is D(D+1)/2, this implies a practical limit of around a few hundred dimensions. The complexity of the popular quadratic Mahalanobis metric learning approach thus limits the size of problem to which metric learning can be applied. Here, we propose a significantly more efficient and scalable approach to the metric learning problem based on the Lagrange dual formulation of the problem. The proposed formulation is much simpler to implement, and therefore allows much larger Mahalanobis metric learning problems to be solved. The time complexity of the proposed method is roughly O(D^3), which is significantly lower than that of the SDP approach. Experiments on a variety of data sets demonstrate that the proposed method achieves an accuracy comparable with the state of the art, but is applicable to significantly larger problems. We also show that the proposed method can be applied to approximately solve more general Frobenius norm regularized SDP problems.
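
The scaling claim is easy to make concrete. With input dimension D = 10^3, the SDP has

    D(D+1)/2 \approx 5 \times 10^5 \text{ variables}, \qquad O(D^{6.5}) \approx 10^{19.5} \text{ flops} \quad \text{vs} \quad O(D^3) = 10^9 \text{ flops},

so the dual approach trades roughly ten orders of magnitude in worst-case cost, which is what makes significantly larger problems tractable.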

  • On the proximal point method in Hadamard spaces — Parin Chaipunya et al., Optimization, Jul 11, 2017. doi:10.1080/02331934.2017.1349124

The proximal point method is one of the most influential procedures for solving nonlinear variational problems. It has recently been introduced in Hadamard spaces for solving convex optimization, and later for variational inequalities. In this paper, we study the general proximal point method for finding a zero point of a maximal monotone set-valued vector field defined on a Hadamard space and valued in its dual. We also give the relation between maximality and Minty's surjectivity condition, which is essential for the proximal point method to be well defined. By exploring the properties of monotonicity and the surjectivity condition, we show under mild assumptions that the proximal point method converges weakly to a zero point. Additionally, by taking into account metric subregularity, we obtain local strong convergence at linear and superlinear rates.

  • Advanced Computing and Optimization Infrastructure for Extremely Large-Scale Graphs on Post Peta-Scale Supercomputers — Katsuki Fujisawa et al., Jan 1, 2016. doi:10.1007/978-3-319-42432-3_33

In this talk, we present our ongoing research project. The objective of this project is to develop advanced computing and optimization infrastructures for extremely large-scale graphs on post peta-scale supercomputers. We explain our challenge to Graph 500 and Green Graph 500 benchmarks that are designed to measure the performance of a computer system for applications that require irregular memory and network access patterns. The 1st Graph500 list was released in November 2010. The Graph500 benchmark measures the performance of any supercomputer performing a BFS (Breadth-First Search) in terms of traversed edges per second (TEPS). In 2014 and 2015, our project team was a winner of the 8th, 10th, and 11th Graph500 and the 3rd to 6th Green Graph500 benchmarks, respectively. We also present our parallel implementation for large-scale SDP (SemiDefinite Programming) problem. The semidefinite programming (SDP) problem is a predominant problem in mathematical optimization. The primal-dual interior-point method (PDIPM) is one of the most powerful algorithms for solving SDP problems, and many research groups have employed it for developing software packages. We solved the largest SDP problem (which has over 2.33 million constraints), thereby creating a new world record. Our implementation also achieved 1.774 PFlops in double precision for large-scale Cholesky factorization using 2,720 CPUs and 4,080 GPUs on the TSUBAME 2.5 supercomputer.

  • A generalized proximal point algorithm for the nonlinear complementarity problem — Regina S. Burachik et al., RAIRO - Operations Research, Oct 1, 1999. doi:10.1051/ro:1999117

We consider a generalized proximal point method (GPPA) for solving the nonlinear complementarity problem with monotone operators in R^n. It differs from the classical proximal point method discussed by Rockafellar for the problem of finding zeroes of monotone operators in the use of generalized distances, called φ-divergences, instead of the Euclidean one. These distances play not only a regularization role but also a penalization one, forcing the sequence generated by the method to remain in the interior of the feasible set, so that the method behaves like an interior point one. Under appropriate assumptions on the φ-divergence and the monotone operator we prove that the sequence converges if and only if the problem has solutions, in which case the limit is a solution. If the problem does not have solutions, then the sequence is unbounded. We extend previous results for the proximal point method concerning convex optimization problems.

  • A Generalized Proximal Point Algorithm for the Variational Inequality Problem in a Hilbert Space — Regina S. Burachik et al., SIAM Journal on Optimization, Feb 1, 1998. doi:10.1137/s1052623495286302

We consider a generalized proximal point method for solving variational inequality problems with monotone operators in a Hilbert space. It differs from the classical proximal point method (as discussed by Rockafellar for the problem of finding zeroes of monotone operators) in the use of generalized distances, called Bregman distances, instead of the Euclidean one. These distances play not only a regularization role but also a penalization one, forcing the sequence generated by the method to remain in the interior of the feasible set so that the method becomes an interior point one. Under appropriate assumptions on the Bregman distance and the monotone operator we prove that the sequence converges (weakly) if and only if the problem has solutions, in which case the weak limit is a solution. If the problem does not have solutions, then the sequence is unbounded. We extend similar previous results for the proximal point method with Bregman distances which dealt only with the finite dimensional case and which applied only to convex optimization problems or to finding zeroes of monotone operators, which are particular cases of variational inequality problems.
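For reference, the Bregman distance induced by a strictly convex function h is

    D_h(x, y) = h(x) - h(y) - \langle \nabla h(y), \, x - y \rangle,

and the generalized proximal iteration for a monotone operator T selects x^{k+1} such that

    0 \in \lambda_k T(x^{k+1}) + \nabla h(x^{k+1}) - \nabla h(x^k).

When \nabla h diverges on the boundary of the feasible set, every iterate stays interior, which is exactly the penalization effect described above.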

  • Optimization of a power system with Interior Point method — B. Venkateswara Rao et al., Dec 1, 2011. doi:10.1109/icpes.2011.6156662

Power flow or load flow solution is essential for continuous evaluation of the performance of power systems so that suitable control measures can be taken when necessary. Load flow studies are made to plan the best operation and control of the existing system as well as to plan future expansion to keep pace with load growth. In power system operation the main aim is to operate the system optimally; optimality is achieved by minimizing cost and losses while maintaining the voltage profile. To achieve this, two optimization techniques are considered, namely optimal power flow (OPF) by the Newton method and the Interior Point (IP) method. In this paper a 5-bus test system is taken and analyzed using these two methods. It is shown how the system power losses decrease and the voltage profiles improve after using the Interior Point (IP) method in this model. The results are generated for the 5-bus system.

  • Solving Max-cut Problem with a mixed penalization method for Semidefinite Programming — Orkia Derkaoui et al., International Journal of Combinatorial Optimization Problems and Informatics, Oct 1, 2024. doi:10.61467/2007.1558.2024.v15i2.204

This paper is an attempt to exploit the opportunities of Semidefinite Programming (SDP), an area of convex and conic optimization. Numerous NP-hard problems can be solved using this approach. Hence, we investigate the strength of SDP to model and provide tight relaxations of combinatorial and quadratic problems, and we present a new polynomial-time algorithm for solving this robust model. The algorithm was first used to solve nonlinear programs, which is why we seek to extend it to SDP programs. It combines two penalization methods: the first is a primal-dual interior point (PDIM) method, while the second is a primal-dual exterior point (PDEM) method. Unlike the first method, which converges globally, the second, also called the primal-dual nonlinear rescaling method, has local superlinear/quadratic convergence. Therefore, it seems appropriate to use a mixed algorithm based on the interior-exterior point method (IEPM): the resolution starts from the interior method and, at a certain level of execution, switches to the exterior method. A convergence evaluation function is used to determine the switching point. Through evaluation, we show that our approach solves several instances of the max-cut problem. This problem is a central graph-theory model that occurs in many real problems, and it is one of the NP-hard problems that has attracted many researchers over the years. We used the semidefinite programming solver SDPA (SemiDefinite Programming Algorithm), modified to include the exterior point method subroutine. From the computational performance, we conclude that as the problem size increases, the interior-exterior point algorithm gets relatively faster. The numerical results obtained are promising.
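
For context, the SDP model being solved is the standard max-cut relaxation: for a graph on n vertices with Laplacian L,

    \max_X \; \tfrac{1}{4} \langle L, X \rangle \quad \text{s.t.} \quad X_{ii} = 1, \; i = 1, \dots, n, \quad X \succeq 0,

obtained by dropping the rank-one constraint X = x x^\top, x \in \{-1, +1\}^n, from the exact formulation.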

  • Proximal Stabilized Interior Point Methods and Low-Frequency-Update Preconditioning Techniques — Stefano Cipolla et al., Journal of Optimization Theory and Applications, Apr 5, 2023. doi:10.1007/s10957-023-02194-4

In this work, in the context of Linear and convex Quadratic Programming, we consider Primal Dual Regularized Interior Point Methods (PDR-IPMs) in the framework of the Proximal Point Method. The resulting Proximal Stabilized IPM (PS-IPM) is strongly supported by theoretical results concerning convergence and the rate of convergence, and can handle degenerate problems. Moreover, in the second part of this work, we analyse the interactions between the regularization parameters and the computational footprint of the linear algebra routines used to solve the Newton linear systems. In particular, when these systems are solved using an iterative Krylov method, we are able to show—using a new rearrangement of the Schur complement which exploits regularization—that general-purpose preconditioners remain attractive for a series of subsequent IPM iterations. Indeed, if on the one hand a series of theoretical results underpin the fact that the approach presented here allows better re-use of such computed preconditioners, on the other, we show experimentally that such (re)computations are needed only in a fraction of the total IPM iterations. The resulting regularized second order methods, for which low-frequency updates of the preconditioners are allowed, pave the path for an alternative class of second order methods characterized by reduced computational effort.
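
The flavour of the regularization can be seen in the linear algebra. For a convex QP, min c^T x + (1/2) x^T Q x s.t. Ax = b, x >= 0, a primal-dual regularized IPM solves Newton systems of the (sketched) form

    \begin{bmatrix} -(Q + \Theta^{-1} + \rho I) & A^\top \\ A & \delta I \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} = \begin{bmatrix} \xi_d \\ \xi_p \end{bmatrix},

where \Theta is the usual IPM scaling matrix and \rho, \delta > 0 stem from the primal and dual proximal terms; because the regularization bounds the spectrum away from zero, a preconditioner computed for one such system stays effective across several subsequent IPM iterations (the paper's exact notation and Schur-complement rearrangement may differ).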

