Published in last 50 years
Articles published on Proximal Point Method
- Research Article
- 10.1137/24m1713132
- Nov 5, 2025
- SIAM Journal on Optimization
- E L Buscaglia + 3 more
A General Framework for Symmetric and Asymmetric Variable Metric Proximal Point Methods, with Relations between Relative-Errors and Summable-Errors
- Research Article
- 10.1016/j.compeleceng.2025.110491
- Aug 1, 2025
- Computers and Electrical Engineering
- R.A.L Rabelo + 4 more
A non-monotone proximal point method for image reconstruction using non-convex total variation models
- Research Article
- 10.1007/s11075-025-02177-8
- Jul 23, 2025
- Numerical Algorithms
- Balendu Bhooshan Upadhyay + 4 more
An inexact proximal point method with quasi-distance for quasiconvex multiobjective optimization problems on Riemannian manifolds
- Research Article
- 10.3390/math13142282
- Jul 16, 2025
- Mathematics
- Alexander J Zaslavski
In the present paper, we use the proximal point method with remotest-set control to find an approximate common zero of a finite collection of maximal monotone maps in a real Hilbert space in the presence of computational errors. We prove that the inexact proximal point method generates an approximate solution if these errors are summable. We also show that if the computational errors are small enough, then the inexact proximal point method generates approximate solutions.
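The inexact iteration this abstract describes is easy to illustrate in a minimal setting. The sketch below is not the paper's algorithm (which handles a finite collection of maps with remotest-set control); it applies a proximal point step with an explicitly summable error sequence to a single monotone affine map, chosen for illustration.

```python
import numpy as np

# Minimal sketch of an inexact proximal point iteration
#   x_{k+1} = (I + lam*A)^{-1} (x_k + lam*b) + e_k
# for the maximal monotone map T(x) = A x - b (A symmetric positive
# definite), whose unique zero is x* = A^{-1} b.  The errors e_k are
# summable, so the iterates still approach x*.

def inexact_ppm(A, b, x0, lam=1.0, iters=60):
    n = len(x0)
    x = x0.astype(float)
    for k in range(iters):
        exact_step = np.linalg.solve(np.eye(n) + lam * A, x + lam * b)
        e_k = 1e-2 * (0.5 ** k) * np.ones(n)   # summable perturbation
        x = exact_step + e_k
    return x

A = np.array([[2.0, 0.0], [0.0, 4.0]])
b = np.array([2.0, 4.0])
x = inexact_ppm(A, b, np.zeros(2))
# x should be close to the zero A^{-1} b = [1, 1]
```

The resolvent here contracts by a factor 1/(1 + lam*mu) per step (mu = smallest eigenvalue of A), so the geometric error sequence is absorbed, matching the summability condition in the abstract.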
- Research Article
- 10.56824/vujs.2024a150a
- Mar 20, 2025
- Vinh University Journal of Science
- Nguyen Thi Thu
In this paper, we prove the finite convergence of sequences generated by (inexact and exact) proximal point methods for solving pseudomonotone equilibrium problems on Hadamard manifolds under the linear conditioning of the solution set. Keywords: Equilibrium problems; Hadamard manifolds; pseudomonotone bifunction; linear conditioning; finite convergence; proximal point method.
- Research Article
- 10.3390/axioms14020127
- Feb 10, 2025
- Axioms
- Behzad Djafari Rouhani + 1 more
We investigate the Δ-convergence and strong convergence of a sequence generated by the proximal point method for pseudo-monotone equilibrium problems in Hadamard spaces. First, we show the Δ-convergence of the generated sequence to a solution of the equilibrium problem. Next, we prove the strong convergence of the generated sequence with some additional conditions imposed on the bifunction. Finally, we prove the strong convergence of the generated sequence, by using Halpern’s regularization method, without any additional condition.
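The Halpern regularization mentioned at the end of this abstract is easiest to see in a Euclidean rather than a Hadamard-space setting. A hedged sketch, with the proximal operator and anchor point chosen purely for illustration:

```python
import numpy as np

# Euclidean sketch of a Halpern-regularized proximal point iteration:
#   x_{k+1} = a_k * u + (1 - a_k) * prox(x_k),   a_k = 1/(k+2),
# where u is a fixed anchor point.  The vanishing, non-summable steps
# a_k force strong convergence to a fixed point of prox.

def halpern_ppm(prox, x0, u, iters=500):
    x = x0.astype(float)
    for k in range(iters):
        a = 1.0 / (k + 2)
        x = a * u + (1 - a) * prox(x)
    return x

c = np.array([1.0, 1.0])
prox = lambda x: 0.5 * (x + c)    # prox of f(x) = 0.5*||x - c||^2, step 1
x = halpern_ppm(prox, np.zeros(2), u=np.zeros(2))
# x approaches the unique fixed point c = [1, 1]
```

In the quadratic example the fixed-point set is a singleton, so the anchor only affects the transient; in general, Halpern iterations select the fixed point nearest the anchor u.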
- Research Article
- 10.1007/s10107-024-02182-0
- Feb 4, 2025
- Mathematical Programming
- Brecht Evens + 3 more
Convergence of the preconditioned proximal point method and Douglas–Rachford splitting in the absence of monotonicity
- Research Article
- 10.1007/s10479-024-06461-z
- Jan 5, 2025
- Annals of Operations Research
- Balendu Bhooshan Upadhyay + 3 more
Inexact proximal point method with a Bregman regularization for quasiconvex multiobjective optimization problems via limiting subdifferentials
- Research Article
- 10.37394/23206.2024.23.109
- Dec 31, 2024
- WSEAS TRANSACTIONS ON MATHEMATICS
- Behzad Djafari Rouhani + 1 more
In this paper, we analyze the convergence of the sequence generated by an inexact proximal point method with unbounded errors for finding zeros of m-accretive operators in Banach spaces. We prove that the zero set of the operator is nonempty if and only if the generated sequence is bounded. In this case, we show that the generated sequence converges strongly to a zero of the operator. This process defines a sunny nonexpansive retraction from the Banach space onto the zero set of the operator. We also present some applications and numerical experiments for our results.
- Research Article
- 10.1080/10556788.2024.2436173
- Dec 23, 2024
- Optimization Methods and Software
- Chunming Tang + 3 more
In this paper, an implementable descent method for nonsmooth multiobjective optimization problems on complete Riemannian manifolds is proposed. The objective functions are only assumed to be locally Lipschitz continuous, rather than convex as in existing subgradient methods for Riemannian multiobjective optimization. Moreover, the constraint manifold is a general manifold rather than one of the specific manifolds used in the proximal point method. A retraction mapping is introduced to avoid computing geodesics, which is expensive. A Riemannian version of the necessary condition for Pareto optimality is proposed, generalizing the classical condition from Euclidean space to the manifold setting. At every iteration, an acceptable descent direction is obtained by constructing the convex hull of some Riemannian ε-subgradients, and a Riemannian Armijo-type line search is then executed to produce the next iterate. Convergence is established in the sense that the algorithm generates, in a finite number of iterations, a point satisfying the necessary condition for Pareto optimality. Finally, some preliminary numerical results are reported, which show that the proposed method is efficient.
- Research Article
- 10.3390/math12233773
- Nov 29, 2024
- Mathematics
- Hammed Anuoluwapo Abass + 3 more
This paper explores the iterative approximation of solutions to equilibrium problems and proposes a simple proximal point method for addressing them. We incorporate the golden ratio technique as an extrapolation method, resulting in a two-step iterative process. This method is self-adaptive and does not require any Lipschitz-type conditions for implementation. We present and prove a weak convergence theorem along with a sublinear convergence rate for our method. The results extend some previously published findings from Hilbert spaces to 2-uniformly convex Banach spaces. To demonstrate the effectiveness of the method, we provide several numerical illustrations and compare the results with those from other methods available in the literature.
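The golden ratio extrapolation the authors build on is, in its original Hilbert-space form (Malitsky's GRAAL for variational inequalities), a simple two-step scheme. The sketch below shows that prototype on a monotone affine operator; it is not the paper's Banach-space method for equilibrium problems, and the step-size rule shown is the fixed one rather than the self-adaptive variant.

```python
import numpy as np

# Sketch of the golden ratio algorithm (GRAAL) for the variational
# inequality F(x*) = 0 with F monotone and L-Lipschitz (here F(x) = A x - b):
#   bar_x_k = ((phi - 1) * x_k + bar_x_{k-1}) / phi
#   x_{k+1} = bar_x_k - lam * F(x_k)        (prox of g = 0 is the identity)
# with phi = (1 + sqrt(5)) / 2 and step lam <= phi / (2 L).

def graal(F, x0, lam, iters=300):
    phi = (1 + np.sqrt(5)) / 2
    x, x_bar = x0.astype(float), x0.astype(float)
    for _ in range(iters):
        x_bar = ((phi - 1) * x + x_bar) / phi
        x = x_bar - lam * F(x)
    return x

A = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([1.0, 2.0])
L = 2.0                                   # Lipschitz constant of F
x = graal(lambda x: A @ x - b, np.zeros(2), lam=(1 + np.sqrt(5)) / (4 * L))
# x approaches the solution A^{-1} b = [1, 1]
```

Note the extrapolated point bar_x, not the iterate x, enters the update; this is what removes the second operator evaluation that extragradient-type methods require.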
- Research Article
- 10.1088/1742-6596/2905/1/012020
- Nov 1, 2024
- Journal of Physics: Conference Series
- Siqi Zhang + 4 more
Abstract This paper proposes a proximal bundle algorithm based on the proximal point method for solving generalized variational inequalities with inexact data. First, the inexactness in the subgradient and function values is explained. The algorithm's fundamental steps are then presented. Finally, the algorithm's convergence is established under suitable conditions.
- Research Article
- 10.1007/s10589-024-00614-3
- Oct 14, 2024
- Computational Optimization and Applications
- Stefano Cipolla + 1 more
A regularized version of the primal-dual Interior Point Method (IPM) for the solution of Semidefinite Programming Problems (SDPs) is presented in this paper. Leveraging the proximal point method, a novel Proximal Stabilized Interior Point Method for SDP (PS-SDP-IPM) is introduced. The method is strongly supported by theoretical results concerning its convergence: a worst-case complexity result is established for the inner regularized infeasible inexact IPM solver. The new method demonstrates increased robustness when dealing with problems characterized by ill-conditioning or linear dependence of the constraints, without requiring any kind of pre-processing. Extensive numerical experience is reported to illustrate the advantages of the proposed method compared with a state-of-the-art solver.
- Research Article
- 10.1080/02331934.2024.2404164
- Sep 20, 2024
- Optimization
- G C Bento + 2 more
ABSTRACT In this paper, we present an approximate proximal point method for addressing the variational inequality problem on Hadamard manifolds, and we analyse its convergence properties. The proposed algorithm exhibits inexactness in two aspects. Firstly, each proximal subproblem is approximated by utilizing the enlargement of the vector field under consideration, and subsequently, the next iteration is obtained by solving this subproblem while allowing for a suitable error tolerance. As an illustrative application, we develop an approximate proximal point method for nonlinear optimization problems on Hadamard manifolds.
- Research Article
- 10.1007/s12532-024-00258-8
- Aug 1, 2024
- Mathematical Programming Computation
- Alex Shtoff
Efficient algorithms for implementing incremental proximal-point methods
- Research Article
- 10.1007/s10957-024-02482-7
- Jun 27, 2024
- Journal of Optimization Theory and Applications
- Erik Alex Papa Quiroz
Proximal Point Method for Quasiconvex Functions in Riemannian Manifolds
- Research Article
- 10.1364/ol.524854
- Jun 17, 2024
- Optics letters
- Huanhuan Yu + 4 more
Solving for the distorted wavefront in wavefront sensorless adaptive optics (WFSL-AO) relies on excellent optimizers. Many local and global optimization algorithms have been applied to WFSL-AO; however, balancing the effectiveness and speed of aberration correction remains a challenge. To overcome this, a novel global optimization algorithm named the asymptotic proximal point (APP) method is introduced into WFSL-AO in this Letter. We compare this algorithm with various existing optimization algorithms in convergence speed and correction capability by performing numerical simulations. The results show that the APP method outperforms all competitors, with a better correction effect and faster speed.
- Research Article
- 10.1007/s11117-024-01057-0
- May 30, 2024
- Positivity
- Xiaopeng Zhao + 3 more
An inexact proximal point method with quasi-distance for quasi-convex multiobjective optimization
- Research Article
- 10.1080/10556788.2024.2322700
- Mar 26, 2024
- Optimization Methods and Software
- Pham Duy Khanh + 2 more
The paper proposes and develops a novel inexact gradient method (IGD) for minimizing C^1-smooth functions with Lipschitzian gradients, i.e. for problems of C^{1,1} optimization. We show that the sequence of gradients generated by IGD converges to zero. The convergence of iterates to stationary points is guaranteed under the Kurdyka-Łojasiewicz (KL) property of the objective function, with convergence rates depending on the KL exponent. The newly developed IGD is applied to designing two novel gradient-based methods of nonsmooth convex optimization: an inexact proximal point method (GIPPM) and an inexact augmented Lagrangian method (GIALM) for convex programs with linear equality constraints. These two methods inherit global convergence properties from IGD and are confirmed by numerical experiments to have practical advantages over some well-known algorithms of nonsmooth convex optimization.
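The GIPPM idea, solving each proximal subproblem inexactly by gradient steps with a shrinking tolerance, can be sketched as follows. This is an illustrative reconstruction under simplifying assumptions (smooth strongly convex objective, constant step size), not the authors' exact scheme.

```python
import numpy as np

# Sketch of an inexact proximal point method whose subproblems
#   min_y  f(y) + ||y - x||^2 / (2 * lam)
# are solved approximately by gradient descent, stopping once the
# subproblem gradient norm falls below a shrinking tolerance tol_k.

def inexact_prox(grad_f, x, lam, tol, step, max_inner=10_000):
    y = x.copy()
    for _ in range(max_inner):
        g = grad_f(y) + (y - x) / lam   # gradient of the prox subproblem
        if np.linalg.norm(g) <= tol:
            break
        y -= step * g
    return y

def gippm(grad_f, x0, lam=1.0, step=0.1, outer=30):
    x = x0.astype(float)
    for k in range(outer):
        x = inexact_prox(grad_f, x, lam, tol=1.0 / (k + 1) ** 2, step=step)
    return x

c = np.array([2.0, -1.0])
x = gippm(lambda y: y - c, np.zeros(2))   # f(y) = 0.5*||y - c||^2
# x approaches the minimizer c = [2, -1]
```

The tolerances 1/(k+1)^2 are summable, which is the standard condition under which inexact proximal point iterations retain the convergence of the exact method.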
- Research Article
- 10.1609/aaai.v38i15.29605
- Mar 24, 2024
- Proceedings of the AAAI Conference on Artificial Intelligence
- Zhichen Zeng + 5 more
Finding node correspondence across networks, namely multi-network alignment, is an essential prerequisite for joint learning on multiple networks. Despite great success in aligning networks in pairs, the literature on multi-network alignment is sparse due to the exponentially growing solution space and lack of high-order discrepancy measures. To fill this gap, we propose a hierarchical multi-marginal optimal transport framework named HOT for multi-network alignment. To handle the large solution space, multiple networks are decomposed into smaller aligned clusters via the fused Gromov-Wasserstein (FGW) barycenter. To depict high-order relationships across multiple networks, the FGW distance is generalized to the multi-marginal setting, based on which networks can be aligned jointly. A fast proximal point method is further developed with guaranteed convergence to a local optimum. Extensive experiments and analysis show that our proposed HOT achieves significant improvements over the state-of-the-art in both effectiveness and scalability.
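The "fast proximal point method" referenced here follows a general pattern in computational optimal transport: KL-proximal iterations, each solved approximately by Sinkhorn scalings. The sketch below shows that pattern (IPOT-style) on plain two-marginal OT; the paper's multi-marginal FGW setting is substantially more involved, and the function names are illustrative.

```python
import numpy as np

# IPOT-style proximal point iteration for discrete optimal transport:
# each outer step approximately solves  min_P <C, P> + KL(P || P_t)/beta
# with a few Sinkhorn scalings.  The effective entropic regularization
# vanishes over outer iterations, so P approaches an exact OT plan.

def ipot(C, a, b, beta=1.0, outer=50, inner=1):
    P = np.outer(a, b)                 # initial feasible plan
    K = np.exp(-C / beta)
    v = np.ones(len(b))
    for _ in range(outer):
        Q = K * P                      # kernel of the KL-proximal subproblem
        for _ in range(inner):
            u = a / (Q @ v)
            v = b / (Q.T @ u)
        P = u[:, None] * Q * v[None, :]
    return P

C = np.array([[0.0, 1.0], [1.0, 0.0]])
a = b = np.array([0.5, 0.5])
P = ipot(C, a, b)
# P approaches the exact plan diag([0.5, 0.5]) as outer iterations grow
```

Unlike a single Sinkhorn run with fixed regularization, the proximal point wrapper recovers a sharp (unblurred) transport plan while each inner subproblem stays as cheap as a Sinkhorn sweep.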