Symplectic discretization approach for developing new proximal point algorithm

Similar Papers
  • Research Article
  • Cited by 4
  • 10.1007/s10589-019-00091-z
On relaxation of some customized proximal point algorithms for convex minimization: from variational inequality perspective
  • Apr 2, 2019
  • Computational Optimization and Applications
  • Feng Ma

The proximal point algorithm (PPA) is a fundamental method for convex programming. When applying the PPA to linearly constrained convex problems, we may prefer to choose an appropriate metric matrix to define the proximal regularization, so that the computational burden of the resulting PPA is reduced and its subproblems sometimes even admit closed-form or efficient solutions. This idea leads to the so-called customized PPA (also known as the preconditioned PPA), which covers the linearized ALM, the primal-dual hybrid gradient algorithm, and the ADMM as special cases. Since each customized PPA has its own special structure and popular applications, it is interesting to ask whether we can design a simple relaxation strategy for these algorithms. In this paper, we treat these customized PPA algorithms uniformly via a mixed variational inequality approach and propose a new relaxation strategy for them. Our idea is based on correcting the dual variables individually and does not rely on relaxing the primal variables; this is very different from previous works. From the variational inequality perspective, we prove global convergence and establish a worst-case convergence rate for these relaxed PPA algorithms. Finally, we demonstrate the performance improvements with some numerical results.
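As a toy illustration of one of the "customized" schemes this abstract mentions, the sketch below applies the primal-dual hybrid gradient method to a scalar equality-constrained least-squares problem. The function name and the specific problem are illustrative choices of mine, not taken from the paper:

```python
def pdhg_equality(c, a, b, tau=0.9, sigma=0.9, iters=200):
    """Primal-dual hybrid gradient (one customized PPA) for the toy problem
        min 0.5 * (x - c)**2  subject to  a * x = b   (all scalars).
    Step sizes must satisfy tau * sigma * a**2 < 1 for convergence."""
    x, lam = 0.0, 0.0
    for _ in range(iters):
        # proximal (primal) step on 0.5*(x - c)^2 with the dual held fixed
        x_new = (tau * (c - a * lam) + x) / (tau + 1.0)
        # dual ascent step using the extrapolated primal point 2*x_new - x
        lam += sigma * (a * (2.0 * x_new - x) - b)
        x = x_new
    return x, lam

# For c = 0, a = 1, b = 2 the KKT conditions give x* = 2, lam* = -2.
```

At the solution, stationarity (x - c) + a*lam = 0 and feasibility a*x = b pin down the pair (x*, lam*), which the iteration recovers numerically.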

  • Research Article
  • Cited by 9
  • 10.1007/s10589-018-9992-3
The generalized proximal point algorithm with step size 2 is not necessarily convergent
  • Mar 3, 2018
  • Computational Optimization and Applications
  • Min Tao + 1 more

The proximal point algorithm (PPA) is a fundamental method in optimization and has been well studied in the literature. Recently, a generalized version of the PPA with a step size in (0, 2) was proposed. Inheriting all the important theoretical properties of the original PPA, the generalized PPA has some numerical advantages that have been well verified in the literature by various applications. It is common sense that larger step sizes are preferred whenever convergence can be theoretically ensured; it is thus interesting to know whether or not the step size of the generalized PPA can be as large as 2. We give a negative answer to this question. Counterexamples are constructed to illustrate the divergence of the generalized PPA with step size 2 in both generic and specific settings, including the generalized versions of the very popular augmented Lagrangian method and the alternating direction method of multipliers. A by-product of our analysis is the failure of convergence of the Peaceman–Rachford splitting method and of a generalized version of the forward–backward splitting method with step size 1.5.
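The step-size boundary discussed in this abstract can be reproduced on a one-variable toy problem. The sketch below is mine, not the paper's counterexample; it assumes only the standard soft-thresholding resolvent of |x| and shows the generalized PPA converging for step size 1.5 while cycling forever at step size 2:

```python
import math

def prox_abs(x, lam):
    """Proximal map of lam * |x| (soft-thresholding): the exact PPA resolvent."""
    return math.copysign(max(abs(x) - lam, 0.0), x)

def generalized_ppa(x0, rho, lam=1.0, iters=50):
    """Generalized PPA update x <- x + rho * (prox(x) - x), step size rho in (0, 2]."""
    x = x0
    for _ in range(iters):
        x = x + rho * (prox_abs(x, lam) - x)
    return x

# rho = 1.5 drives the iterate geometrically to the minimizer x* = 0 of |x|;
# rho = 2 from x0 = 1 (with lam = 1) cycles between 1 and -1 and never converges.
```

With rho = 2 and lam = 1 the update from x = 1 is 2*prox_abs(1, 1) - 1 = -1 and vice versa, an exact two-cycle, matching the paper's message that step size 2 is not necessarily convergent.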

  • Research Article
  • 10.1155/2009/957407
Super-Relaxed ()-Proximal Point Algorithms, Relaxed ()-Proximal Point Algorithms, Linear Convergence Analysis, and Nonlinear Variational Inclusions
  • Jan 1, 2009
  • Fixed Point Theory and Applications
  • Ravi P Agarwal + 1 more

We survey recent advances in the general theory of maximal (set-valued) monotone mappings and their demonstrated role in convex programming and the closely related field of nonlinear variational inequalities. We focus mostly on applications of the super-relaxed ( )-proximal point algorithm to solving a class of nonlinear variational inclusion problems, based on the notion of maximal ( )-monotonicity. The investigations highlighted in this communication are greatly influenced by the celebrated work of Rockafellar (1976), while others have also played a significant part in generalizing the proximal point algorithm considered by Rockafellar (1976) to the relaxed proximal point algorithm of Eckstein and Bertsekas (1992). Even for the linear convergence analysis of the overrelaxed (or super-relaxed) ( )-proximal point algorithm, the fundamental model for Rockafellar's case does the job. Furthermore, we explore possibilities of generalizing the Yosida regularization/approximation in light of maximal ( )-monotonicity, and of applying it to first-order evolution equations/inclusions.

  • Research Article
  • Cited by 1
  • 10.1080/02331934.2024.2345761
New fast proximal point algorithms for monotone inclusion problems with applications to image recovery
  • May 1, 2024
  • Optimization
  • Lateef O Jolaoso + 2 more

The proximal point algorithm has many applications in convex optimization, and several versions, including generalized proximal point algorithms and accelerated proximal point algorithms, have been studied in the literature. In this paper, we propose accelerated versions of generalized proximal point algorithms to find a zero of a maximal monotone operator in Hilbert spaces. We give both weak and linear convergence results for our proposed algorithms under standard conditions. Numerical applications of our results to image recovery are given, and numerical implementations show that our algorithms are effective and superior to other related accelerated proximal point algorithms in the literature.

  • Research Article
  • Cited by 5
  • 10.1080/02331934.2020.1751158
The indefinite proximal point algorithms for maximal monotone operators
  • Apr 15, 2020
  • Optimization
  • Fan Jiang + 2 more

The proximal point algorithm (PPA) has been widely used in convex optimization, and many algorithms fall into its framework. To guarantee the convergence of the PPA, however, existing results conventionally require the positive definiteness of the corresponding proximal measure. In some sense, this essentially results in tiny step sizes (or over-regularization) for the subproblems and thus inevitably decelerates the overall convergence of the PPA. In this paper, we investigate the possibility of relaxing the positive definiteness requirement on the proximal measure. An indefinite PPA for finding a zero of a maximal monotone operator is thus proposed by choosing an indefinite proximal regularization term, resulting in larger step sizes. Under suitable conditions, we prove the global convergence of the proposed algorithm and its extension. To make our method more practical, we suggest solving the subproblem approximately and propose two flexible inexact criteria. We show by a simple example that the condition guaranteeing the convergence of the proposed indefinite PPA is tight. In addition, we show how to apply the indefinite PPA to some convex models. We report some preliminary numerical results, which demonstrate the efficiency of the proposed algorithms.

  • Research Article
  • Cited by 10
  • 10.1007/bf00253806
Modified proximal point algorithm for extended linear-quadratic programming
  • Nov 1, 1992
  • Computational Optimization and Applications
  • Ciyou Zhu

Extended linear-quadratic programming arises as a flexible modeling scheme in dynamic and stochastic optimization, which allows for penalty terms and facilitates the use of duality. Computationally it raises new challenges as well as new possibilities in large-scale applications. Recent efforts have been focused on the fully quadratic case ([15] and [23]), while relying on the fundamental proximal point algorithm (PPA) as a shell of “outer” iterations when the problem is not fully quadratic. In this paper, we focus on the nonfully quadratic cases by proposing some new variants of the fundamental PPA. We first construct a continuously differentiable saddle function S(u, v) through infimal convolution in such a way that the optimal primal-dual pairs of the original problem are just the saddle points of S(u, v) on the whole space. Then the original extended linear-quadratic-programming problem reduces to solving the nonlinear equation ∇S(u, v)=0. We then embed the fundamental PPA and some of its previous variants in the framework of a Newton-like iteration for this equation. After revealing the local quadratic structure of S near the solution, we derive new extensions of the fundamental PPA. In numerical tests, the modified iteration scheme based on the quasi-Newton update formula outperforms the fundamental PPA considerably.

  • Research Article
  • Cited by 1
  • 10.3934/jimo.2013.9.153
Proximal point algorithm for nonlinear complementarity problem based on the generalized Fischer-Burmeister merit function
  • Jan 1, 2013
  • Journal of Industrial & Management Optimization
  • Yu-Lin Chang + 2 more

This paper is devoted to the study of the proximal point algorithm for solving monotone and nonmonotone nonlinear complementarity problems. The proximal point algorithm generates a sequence by solving subproblems that are regularizations of the original problem. After giving an appropriate criterion for approximate solutions of the subproblems, obtained by adopting a merit function, the proximal point algorithm is shown to have global and superlinear convergence properties. To solve the subproblems efficiently, we introduce a generalized Newton method and show that, under mild conditions, only one Newton step is eventually needed to obtain a desired approximate solution satisfying the criterion. The motivations of this paper are twofold: one is to analyze the proximal point algorithm based on the generalized Fischer-Burmeister function, which includes the Fischer-Burmeister function as a special case; the other is to see whether there is a noticeable change in numerical performance when the parameter in the generalized Fischer-Burmeister function is adjusted.
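For reference, the generalized Fischer-Burmeister function mentioned here has a simple closed form, phi_p(a, b) = ||(a, b)||_p - (a + b). The sketch below is an illustration of that formula (with p the adjustable parameter the abstract refers to), checking its defining property that it vanishes exactly when a >= 0, b >= 0, and a*b = 0:

```python
def gen_fischer_burmeister(a, b, p=2.0):
    """Generalized Fischer-Burmeister function
        phi_p(a, b) = ||(a, b)||_p - (a + b),   p > 1.
    phi_p(a, b) = 0 iff a >= 0, b >= 0, and a * b = 0, so the NCP can be
    recast as a system of equations; p = 2 recovers the classical
    Fischer-Burmeister function sqrt(a**2 + b**2) - (a + b)."""
    return (abs(a) ** p + abs(b) ** p) ** (1.0 / p) - (a + b)

# phi vanishes on complementary pairs such as (0, 3) and (2, 0),
# and is nonzero when complementarity fails, e.g. at (-1, 1) or (1, 1).
```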

  • Research Article
  • Cited by 5
  • 10.1080/01630568708816257
How good are the proximal point algorithms?
  • Jan 1, 1987
  • Numerical Functional Analysis and Optimization
  • A.A Goldstein + 1 more

Proximal point algorithms are applicable to a variety of settings in optimization; see Rockafellar, R.T. (1976) and Spingarn, J.E. (1981) for examples. We consider a simple idealized proximal point algorithm using gradient minimization on C² convex functions. This is compared to the direct use of the same gradient method with an appropriate mollifier. The comparison is made by determining estimates of the cost required to reduce the function to a given precision ε. Our object is to assess the potential efficiency of these algorithms even if we do not know how to realize this potential. We find that for distant starting values, proximal point algorithms are considerably less laborious than a direct method. However, there is no essential improvement in the complexity, only in the numerical factors. This negative conclusion holds for the entire family of proximal point algorithms based on the gradient methods of this paper. The algorithms considered may be important for large-scale optimization problems. In ...

  • Book Chapter
  • Cited by 4
  • 10.1007/978-1-4757-3279-5_17
The Proximal Point Algorithm for the P0 Complementarity Problem
  • Jan 1, 2001
  • Nobuo Yamashita + 2 more

In this paper we consider a proximal point algorithm (PPA) for solving the nonlinear complementarity problem (NCP) with a P0 function. PPA was originally proposed by Martinet and further developed by Rockafellar for monotone variational inequalities and monotone operator problems. PPA is known to have nice convergence properties under mild conditions. However, until now, it has been applied mainly to monotone problems. In this paper, we propose a PPA for the NCP involving a P0 function and establish its global convergence under appropriate conditions by using the Mountain Pass Theorem. Moreover, we give conditions under which it has a superlinear rate of convergence.

  • Research Article
  • Cited by 27
  • 10.11650/twjm/1500405125
PROXIMAL POINT ALGORITHMS AND FOUR RESOLVENTS OF NONLINEAR OPERATORS OF MONOTONE TYPE IN BANACH SPACES
  • Nov 1, 2008
  • Taiwanese Journal of Mathematics
  • Wataru Takahashi

In this article, motivated by Rockafellar’s proximal point algorithm in Hilbert spaces, we discuss various weak and strong convergence theorems for resolvents of accretive operators and maximal monotone operators which are connected with the proximal point algorithm. We first deal with proximal point algorithms in Hilbert spaces. Then, we consider weak and strong convergence theorems for resolvents of accretive operators in Banach spaces which generalize the results in Hilbert spaces. Further, we deal with weak and strong convergence theorems for three types of resolvents of maximal monotone operators in Banach spaces which are related to proximal point algorithms. Finally, in Section 7, we apply some results obtained in Banach spaces to the problem of finding minimizers of convex functions in Banach spaces.

  • Research Article
  • Cited by 11
  • 10.1007/s10957-016-1028-5
Strong Convergence of Two Proximal Point Algorithms with Possible Unbounded Error Sequences
  • Oct 19, 2016
  • Journal of Optimization Theory and Applications
  • Behzad Djafari Rouhani + 1 more

We consider a proximal point algorithm with errors for a maximal monotone operator in a real Hilbert space, previously studied by Boikanyo and Morosanu, who assumed that the zero set of the operator is nonempty and that the error sequence is bounded. In this paper, using our own approach, we significantly improve the previous results by giving a necessary and sufficient condition for the zero set of the operator to be nonempty, and by showing that, in this case, the iterative sequence converges strongly to the metric projection of some point onto the zero set of the operator, without assuming boundedness of the error sequence. We also study, in a similar way, the strong convergence of a new proximal point algorithm and present some applications of our results to optimization and variational inequalities.

  • Research Article
  • Cited by 1
  • 10.1080/01630563.2022.2109171
A New Boosted Proximal Point Algorithm for Minimizing Nonsmooth DC Functions
  • Aug 4, 2022
  • Numerical Functional Analysis and Optimization
  • Amir Hamzeh Alizadeh Tabrizian + 1 more

Several optimization schemes are known for convex optimization problems. Significant progress beyond convexity was made by considering the class of functions representable as a difference of convex functions, which constitutes the backbone of nonconvex programming and global optimization. In this article, we introduce a new algorithm for minimizing the difference of a continuously differentiable function and a convex function that accelerates the convergence of the classical proximal point algorithm. We prove that the point computed by the proximal point algorithm can be used to define a descent direction for the objective function evaluated at this point. Our algorithms combine the proximal point algorithm with a line search step that uses this descent direction. Convergence of the algorithms is proved, and the rate of convergence is analyzed under the strong Kurdyka–Łojasiewicz property of the objective function.
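The proximal-point-plus-line-search idea described in this abstract can be sketched on a toy smooth problem. The following is an illustrative reconstruction under simplifying assumptions of mine (a quadratic objective whose proximal map is exact and closed-form), not the authors' algorithm:

```python
def prox_quadratic(x, lam, c=3.0):
    """Exact proximal map of lam * (x - c)**2, standing in for the PPA subproblem."""
    return (x + 2.0 * lam * c) / (1.0 + 2.0 * lam)

def boosted_prox_step(f, x, lam, t0=2.0, beta=0.5, max_backtracks=20):
    """One 'boosted' step: compute the proximal point y, then backtrack along
    the direction d = y - x, trying to descend further than y itself."""
    y = prox_quadratic(x, lam)
    d = y - x                      # descent direction defined by the PPA point
    t = t0
    for _ in range(max_backtracks):
        if f(y + t * d) < f(y):    # accept any strict extra decrease beyond y
            return y + t * d
        t *= beta
    return y                       # fall back to the plain proximal point
```

From x = 0 with lam = 1 and f(z) = (z - 3)**2, the plain proximal point is y = 2, while the boosted step backtracks to t = 0.5 and lands exactly on the minimizer z* = 3, illustrating how the extra line search can outpace the classical PPA step.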

  • Research Article
  • Cited by 1
  • 10.1080/02331934.2024.2325552
The degenerate variable metric proximal point algorithm and adaptive stepsizes for primal–dual Douglas–Rachford
  • Mar 6, 2024
  • Optimization
  • Dirk A Lorenz + 2 more

In this paper, the degenerate preconditioned proximal point algorithm is combined with the idea of varying preconditioners, leading to the degenerate variable metric proximal point algorithm. The weak convergence of the resulting iteration is proven. From the perspective of the degenerate variable metric proximal point algorithm, a version of the primal–dual Douglas–Rachford method with varying preconditioners is derived, and a proof of its weak convergence, based on the previous results for the proximal point algorithm, is provided as well. After that, we derive a heuristic for choosing those varying preconditioners in order to increase the convergence speed of the method.

  • Research Article
  • Cited by 47
  • 10.1137/0328029
A Generalization of the Proximal Point Algorithm
  • Mar 1, 1990
  • SIAM Journal on Control and Optimization
  • Cu D Ha

The problem considered in this paper is to find a solution to the generalized equation $0 \in T(x,y)$, where T is a maximal monotone operator on the product $H_1 \times H_2 $ of two Hilbert spaces $H_1 $ and $H_2 $. We give a generalization of the proximal map and the proximal point algorithm in which the proposed iterative procedure is based on just one variable. Applying to convex programming problems, instead of adding a quadratic term for all variables as in the proximal point algorithm, a quadratic term for a subset of variables is added. This paper proves that under a mild assumption our algorithm has the same convergence properties as the regular proximal point algorithm.
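The idea of adding a quadratic proximal term for only a subset of the variables can be illustrated on a two-variable toy problem. The sketch below is an assumption-laden illustration of mine, not the paper's method: it regularizes x only and solves each subproblem in closed form:

```python
def partial_ppa(x0, lam=1.0, iters=60):
    """Partial proximal point iteration for
        min f(x, y) = (x - 1)**2 + (x + y)**2,
    where the proximal term (1/(2*lam)) * (x - x_k)**2 is added for x only.
    Each subproblem is solved exactly: y = -x is optimal for any fixed x,
    and the remaining one-dimensional problem in x has a closed form."""
    x = x0
    for _ in range(iters):
        # argmin_x (x - 1)**2 + (1/(2*lam)) * (x - x_k)**2
        x = (2.0 * lam + x) / (2.0 * lam + 1.0)
    return x, -x

# converges to the true minimizer (1, -1) of f
```

With lam = 1 the x-update contracts toward 1 with factor 1/3 per iteration, so the scheme behaves like the regular PPA even though only one variable carries a proximal term, which is the point of the abstract.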

  • Conference Article
  • 10.1109/cdc.1987.272504
A generalization of the proximal point algorithm
  • Dec 1, 1987
  • Cu D Ha

The problem that we consider in this paper is to find a solution to the generalized equation 0 ∈ T(x,y), where T is a maximal monotone operator on the product H1 × H2 of two Hilbert spaces H1 and H2. We give a generalization of the proximal map and the proximal point algorithm in which the proposed iterative procedure is based on just one variable. Applying to convex programming problems, instead of adding a quadratic term for all variables as in the proximal point algorithm, we add a quadratic term for a subset of variables. We prove that under a mild assumption our algorithm has the same convergence properties as the regular proximal point algorithm.
