Asymptotic Regularity Results for Halpern-type Inexact Iterations for Coincidence Problems

Abstract

In our recent work we computed linear rates of asymptotic regularity for a viscosity version of Halpern-type iterations for coincidence problems in a metric space with a hyperbolic structure. In the present paper we extend this result to inexact iterations in the presence of computational errors.
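The scheme can be illustrated with a toy numerical sketch. This is not the paper's actual setting: the map T, the step sizes a_n = 1/(n+2), and the error bound below are illustrative assumptions on the real line rather than a hyperbolic metric space, and T stands in for the mapping pair of the coincidence problem. The sketch shows the asymptotic-regularity quantity |x_n - T(x_n)| becoming small even though the errors e_n are only bounded, not summable:

```python
import random

def T(x):
    # Hypothetical nonexpansive map on the real line (a contraction with
    # fixed point 2.0); stand-in for the mappings of the coincidence problem.
    return 0.5 * x + 1.0

def inexact_halpern(x0, anchor, n_steps, err_bound):
    """Inexact Halpern-type iteration
        x_{n+1} = a_n * anchor + (1 - a_n) * T(x_n) + e_n,
    with a_n = 1/(n+2) and |e_n| <= err_bound (bounded, not summable)."""
    x = x0
    for n in range(n_steps):
        a = 1.0 / (n + 2)
        e = random.uniform(-err_bound, err_bound)  # computational error at step n
        x = a * anchor + (1 - a) * T(x) + e
    return x

random.seed(0)
x = inexact_halpern(x0=10.0, anchor=10.0, n_steps=2000, err_bound=1e-4)
residual = abs(x - T(x))  # asymptotic regularity: |x_n - T(x_n)| should be small
print(residual)
```

With exact iterations the residual decays at the linear (in n) rate the paper quantifies; the bounded errors only inflate it by a term proportional to the error bound.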

Similar Papers
  • Research Article
  • Cited by 23
  • 10.1137/090766930
Convergence of a Proximal Point Method in the Presence of Computational Errors in Hilbert Spaces
  • Jan 1, 2010
  • SIAM Journal on Optimization
  • Alexander J Zaslavski

We study the convergence of a proximal point method in a Hilbert space in the presence of computational errors. Most results known in the literature establish the convergence of proximal point methods when computational errors are summable. In the present paper the convergence of the method is established for nonsummable computational errors. We show that the proximal point method generates a good approximate solution if the sequence of computational errors is bounded from above by some constant.
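The phenomenon can be sketched concretely (under assumed data, not Zaslavski's construction: the quadratic objective and the error bound are hypothetical). An inexact proximal point iteration with bounded but nonsummable errors still lands in a small neighborhood of the minimizer:

```python
import random

def prox_quadratic(x, lam):
    # Exact proximal operator of f(y) = y**2:
    # argmin_y [ y**2 + (1/(2*lam)) * (y - x)**2 ] = x / (1 + 2*lam)
    return x / (1.0 + 2.0 * lam)

random.seed(1)
x, lam, err_bound = 10.0, 1.0, 1e-3
for _ in range(200):
    e = random.uniform(-err_bound, err_bound)  # bounded, nonsummable error
    x = prox_quadratic(x, lam) + e             # inexact proximal step
gap = abs(x - 0.0)  # distance to the exact minimizer x* = 0
print(gap)
```

"Nonsummable" here means the series of error bounds diverges (each |e_n| <= 1e-3 forever), so the iterates cannot converge exactly; they instead remain in a neighborhood of the solution whose radius is proportional to the error bound, which is the kind of guarantee these papers establish.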

  • Research Article
  • Cited by 7
  • 10.1080/01630563.2012.706769
The Extragradient Method for Convex Optimization in the Presence of Computational Errors
  • Dec 1, 2012
  • Numerical Functional Analysis and Optimization
  • Alexander J Zaslavski

In this article, we study convergence of the extragradient method for constrained convex minimization problems in a Hilbert space. Our goal is to obtain an ε-approximate solution of the problem in the presence of computational errors, where ε is a given positive number. Most results known in the literature establish convergence of optimization algorithms when computational errors are summable. In this article, the convergence of the extragradient method for solving convex minimization problems is established for nonsummable computational errors. We show that the extragradient method generates a good approximate solution if the sequence of computational errors is bounded from above by a constant.

  • Research Article
  • Cited by 6
  • 10.1007/s10957-011-9975-3
The Extragradient Method for Solving Variational Inequalities in the Presence of Computational Errors
  • Dec 13, 2011
  • Journal of Optimization Theory and Applications
  • A J Zaslavski

In a Hilbert space, we study the convergence of the subgradient method to a solution of a variational inequality in the presence of computational errors. Most results known in the literature establish convergence of optimization algorithms when computational errors are summable. In the present paper, the convergence of the subgradient method for solving variational inequalities is established for nonsummable computational errors. We show that the subgradient method generates a good approximate solution if the sequence of computational errors is bounded from above by a constant.

  • Research Article
  • Cited by 7
  • 10.1016/j.jmaa.2012.11.055
The extragradient method for finding a common solution of a finite family of variational inequalities and a finite family of fixed point problems in the presence of computational errors
  • Dec 7, 2012
  • Journal of Mathematical Analysis and Applications
  • Alexander J Zaslavski


  • Research Article
  • Cited by 11
  • 10.1016/j.na.2012.06.015
A proximal point algorithm for finding a common zero of a finite family of maximal monotone operators in the presence of computational errors
  • Jul 7, 2012
  • Nonlinear Analysis: Theory, Methods & Applications
  • Alexander J Zaslavski


  • Research Article
  • Cited by 16
  • 10.1007/s10957-011-9820-8
Maximal Monotone Operators and the Proximal Point Algorithm in the Presence of Computational Errors
  • Mar 25, 2011
  • Journal of Optimization Theory and Applications
  • A J Zaslavski

In a finite-dimensional Euclidean space, we study the convergence of a proximal point method to a solution of the inclusion induced by a maximal monotone operator, in the presence of computational errors. Most results known in the literature establish the convergence of proximal point methods when computational errors are summable. In the present paper, the convergence of the method is established for nonsummable computational errors. We show that the proximal point method generates a good approximate solution if the sequence of computational errors is bounded from above by a constant.

  • Conference Article
  • 10.1109/acssc.2014.7094772
Sequential prediction of individual sequences in the presence of computational errors
  • Nov 1, 2014
  • Mehmet A Donmez + 1 more

We study the performance of a sequential linear prediction system built on a nanoscale beyond-CMOS circuit fabric that may introduce errors in computation. We propose a new sequential linear prediction algorithm under a mixture-of-experts framework that performs satisfactorily in the presence of computational errors. We introduce a worst-case approach to model the computational errors, where we view the erroneous circuit fabric as an adversary that perturbs the prediction algorithm so as to heavily deteriorate its performance. We demonstrate that our algorithm achieves uniformly good performance under the worst-case error approach in an individual sequence manner.

  • Book Chapter
  • 10.1007/978-3-319-30921-7_16
Newton’s Method
  • Jan 1, 2016
  • Alexander J Zaslavski

In this chapter we study the convergence of Newton’s method for nonlinear equations and nonlinear inclusions in a Banach space. Nonlinear mappings, which appear in the right-hand side of the equations, are not necessarily differentiable. Our goal is to obtain an approximate solution in the presence of computational errors. In order to meet this goal, in the case of inclusions, we study the behavior of iterates of nonexpansive set-valued mappings in the presence of computational errors.

  • Research Article
  • Cited by 16
  • 10.1080/01630563.2010.489248
The Projected Subgradient Method for Nonsmooth Convex Optimization in the Presence of Computational Errors
  • Jul 13, 2010
  • Numerical Functional Analysis and Optimization
  • Alexander J Zaslavski

We study the convergence of the projected subgradient method for constrained convex optimization in a Hilbert space. Our goal is to obtain an ε-approximate solution of the problem in the presence of computational errors, where ε is a given positive number. The results that we obtain are important in practice because computations always introduce numerical errors.

  • Research Article
  • Cited by 3
  • 10.11650/twjm/1500406077
Convergence of a Proximal-like Algorithm in the Presence of Computational Errors
  • Dec 1, 2010
  • Taiwanese Journal of Mathematics
  • Alexander J Zaslavski

We study the convergence of a proximal-like minimization algorithm using Bregman functions. We extend the convergence results by Censor and Zenios (1992) and by Chen and Teboulle (1993) by showing that the convergence of the algorithm is preserved in the presence of computational errors.

  • Research Article
  • Cited by 7
  • 10.1016/j.jat.2013.07.012
Subgradient projection algorithms for convex feasibility problems in the presence of computational errors
  • Aug 1, 2013
  • Journal of Approximation Theory
  • Alexander J Zaslavski


  • Book Chapter
  • 10.1007/978-3-030-60300-7_2
Nonsmooth Convex Optimization
  • Jan 1, 2020
  • Alexander J Zaslavski

In this chapter, we study an extension of the projected subgradient method for minimization of convex and nonsmooth functions in the presence of computational errors. The problem is described by an objective function and a set of feasible points. Each iteration of the algorithm consists of two steps: the first computes a subgradient of the objective function, and the second computes a projection onto the feasible set. Each of these two steps carries its own computational error, and in general the two errors differ. In our recent research, we showed that the algorithm generates a good approximate solution if all the computational errors are bounded from above by a small positive constant. Moreover, if the computational errors of the two steps are known, we determine which approximate solution can be obtained and how many iterates are needed. In this chapter, we generalize all these results to an extension of the projected subgradient method in which the projection onto the feasible set is replaced by a quasi-nonexpansive retraction onto this set.

  • Book Chapter
  • 10.1007/978-3-319-33255-0_8
Proximal Point Algorithm
  • Jan 1, 2016
  • Alexander J Zaslavski

In a Hilbert space, we study the convergence of an iterative proximal point method to a common zero of a finite family of maximal monotone operators in the presence of computational errors. Most results known in the literature establish the convergence of proximal point methods when computational errors are summable. In this chapter, the convergence of the method is established for nonsummable computational errors. We show that the proximal point method generates a good approximate solution if the sequence of computational errors is bounded from above by a constant. Moreover, for a known computational error, we determine which approximate solution can be obtained and how many iterates are needed.

  • Book Chapter
  • 10.1007/978-3-319-30921-7_14
Continuous Subgradient Method
  • Jan 1, 2016
  • Alexander J Zaslavski

In this chapter we study the continuous subgradient algorithm for minimization of convex functions in the presence of computational errors. We show that the algorithm generates a good approximate solution if the computational errors are bounded from above by a small positive constant. Moreover, for a known computational error, we determine which approximate solution can be obtained and how much time is needed.

  • Research Article
  • 10.11650/twjm/1500406796
THE SUBGRADIENT METHOD FOR SOLVING VARIATIONAL INEQUALITIES WITH COMPUTATIONAL ERRORS IN A HILBERT SPACE
  • Sep 1, 2012
  • Taiwanese Journal of Mathematics
  • Alexander J Zaslavski

In a Hilbert space, we study the asymptotic behavior of the subgradient method for solving a variational inequality in the presence of computational errors. Most results known in the literature establish convergence of optimization algorithms when computational errors are summable. In the present paper, the convergence of the subgradient method to the solution of a variational inequality is established for nonsummable computational errors. We show that the subgradient method generates good approximate solutions if the sequence of computational errors is bounded from above by a constant.
