Asymptotic Regularity Results for Halpern-type Inexact Iterations for Coincidence Problems
Abstract In our recent work we computed linear rates of asymptotic regularity for a viscosity version of Halpern-type iterations for coincidence problems in a metric space with a hyperbolic structure. In the present paper we extend this result to inexact iterations in the presence of computational errors.
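For orientation, here is a minimal sketch of the kind of scheme this line of work concerns, assuming the standard viscosity Halpern template for a nonexpansive mapping T and a contraction f in a hyperbolic space (⊕ denotes the hyperbolic convex combination; the coincidence-problem version couples two underlying mappings, and the paper's exact scheme may differ):

```latex
% Exact viscosity Halpern-type step with coefficients \alpha_n \in (0,1):
x_{n+1} = (1 - \alpha_n)\, T x_n \oplus \alpha_n\, f(x_n).
% Inexact variant: the step is computed only up to an error \delta_n,
d\bigl(x_{n+1},\, (1 - \alpha_n)\, T x_n \oplus \alpha_n\, f(x_n)\bigr) \le \delta_n.
% Asymptotic regularity means d(x_n, T x_n) \to 0, here with an explicit linear rate.
```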
- Research Article
- 10.1137/090766930
- Jan 1, 2010
- SIAM Journal on Optimization
We study the convergence of a proximal point method in a Hilbert space in the presence of computational errors. Most results known in the literature establish the convergence of proximal point methods when computational errors are summable. In the present paper the convergence of the method is established for nonsummable computational errors. We show that the proximal point method generates a good approximate solution if the sequence of computational errors is bounded from above by some constant.
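As a hedged illustration of the method discussed (the operator name A and the placement of the error are assumptions, not taken from the paper):

```latex
% Inexact proximal point step for a maximal monotone operator A on a Hilbert space,
% with step sizes \lambda_n > 0 and computational errors e_n:
x_{n+1} = (I + \lambda_n A)^{-1} x_n + e_n, \qquad \|e_n\| \le \varepsilon.
% Summable errors (\sum_n \|e_n\| < \infty) would give exact convergence; here only
% the uniform bound \varepsilon is assumed, and one obtains an approximate solution.
```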
- Research Article
- 10.1080/01630563.2012.706769
- Dec 1, 2012
- Numerical Functional Analysis and Optimization
In this article, we study convergence of the extragradient method for constrained convex minimization problems in a Hilbert space. Our goal is to obtain an ε-approximate solution of the problem in the presence of computational errors, where ε is a given positive number. Most results known in the literature establish convergence of optimization algorithms when computational errors are summable. In this article, the convergence of the extragradient method for solving convex minimization problems is established for nonsummable computational errors. We show that the extragradient method generates a good approximate solution if the sequence of computational errors is bounded from above by a constant.
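A hedged sketch of the classical extragradient template in this setting (exactly where the errors enter each step is an assumption):

```latex
% Extragradient steps for \min_{x \in C} f(x), with projection P_C onto the feasible
% set, step size \lambda > 0, and per-step computational errors e_k, e_k':
y_k     = P_C\bigl(x_k - \lambda \nabla f(x_k)\bigr) + e_k,
x_{k+1} = P_C\bigl(x_k - \lambda \nabla f(y_k)\bigr) + e_k', \qquad \|e_k\|, \|e_k'\| \le \varepsilon.
```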
- Research Article
- 10.1007/s10957-011-9975-3
- Dec 13, 2011
- Journal of Optimization Theory and Applications
In a Hilbert space, we study the convergence of the subgradient method to a solution of a variational inequality in the presence of computational errors. Most results known in the literature establish convergence of optimization algorithms when computational errors are summable. In the present paper, the convergence of the subgradient method for solving variational inequalities is established for nonsummable computational errors. We show that the subgradient method generates a good approximate solution if the sequence of computational errors is bounded from above by a constant.
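For reference, a hedged sketch of a projection-type subgradient step for a variational inequality (the operator F and the error placement are assumptions):

```latex
% Variational inequality VI(F, C): find x^* \in C with
% \langle F(x^*),\, y - x^* \rangle \ge 0 \ \text{for all } y \in C.
% Inexact projection-type step with step sizes \alpha_n > 0 and errors e_n:
x_{n+1} = P_C\bigl(x_n - \alpha_n F(x_n)\bigr) + e_n, \qquad \|e_n\| \le \varepsilon.
```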
- Research Article
- 10.1016/j.jmaa.2012.11.055
- Dec 7, 2012
- Journal of Mathematical Analysis and Applications
The extragradient method for finding a common solution of a finite family of variational inequalities and a finite family of fixed point problems in the presence of computational errors
- Research Article
- 10.1016/j.na.2012.06.015
- Jul 7, 2012
- Nonlinear Analysis: Theory, Methods & Applications
A proximal point algorithm for finding a common zero of a finite family of maximal monotone operators in the presence of computational errors
- Research Article
- 10.1007/s10957-011-9820-8
- Mar 25, 2011
- Journal of Optimization Theory and Applications
In a finite-dimensional Euclidean space, we study the convergence of a proximal point method to a solution of the inclusion induced by a maximal monotone operator, in the presence of computational errors. Most results known in the literature establish the convergence of proximal point methods when computational errors are summable. In the present paper, the convergence of the method is established for nonsummable computational errors. We show that the proximal point method generates a good approximate solution if the sequence of computational errors is bounded from above by a constant.
- Conference Article
- 10.1109/acssc.2014.7094772
- Nov 1, 2014
We study the performance of a sequential linear prediction system built on a nanoscale beyond-CMOS circuit fabric that may introduce errors in computation. We propose a new sequential linear prediction algorithm under a mixture-of-experts framework that performs satisfactorily in the presence of computational errors. We introduce a worst-case approach to modeling the computational errors, where we view the erroneous circuit fabric as an adversary that perturbs the prediction algorithm so as to severely degrade its performance. We demonstrate that our algorithm achieves uniformly good performance under this worst-case error model in an individual-sequence manner.
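A toy sketch of the flavor of this setup (the expert pool, constants, and perturbation model are hypothetical illustrations, not the paper's algorithm): an exponentially weighted mixture of two simple predictors whose outputs are hit by a bounded, worst-case perturbation standing in for the faulty circuit fabric.

```python
# Hypothetical sketch: exponentially weighted mixture-of-experts prediction
# under a bounded adversarial perturbation (all details are assumptions).
import numpy as np

T, eps, eta = 200, 0.05, 2.0            # horizon, error bound, learning rate
x = np.sin(0.3 * np.arange(T + 1))      # signal predicted one step at a time

experts = [lambda h: h[-1],             # expert 1: repeat the last value
           lambda h: 2 * h[-1] - h[-2]] # expert 2: linear extrapolation
w = np.ones(len(experts))               # mixture weights
total_loss = 0.0

for t in range(2, T):
    preds = np.array([e(x[:t]) for e in experts])
    preds += eps * np.sign(preds - x[t])     # worst-case bounded perturbation
    y_hat = w @ preds / w.sum()              # mixture prediction
    total_loss += (y_hat - x[t]) ** 2
    w *= np.exp(-eta * (preds - x[t]) ** 2)  # exponential weight update

print("average loss:", total_loss / (T - 2), "weights:", w / w.sum())
```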
- Book Chapter
- 10.1007/978-3-319-30921-7_16
- Jan 1, 2016
In this chapter we study the convergence of Newton’s method for nonlinear equations and nonlinear inclusions in a Banach space. Nonlinear mappings, which appear in the right-hand side of the equations, are not necessarily differentiable. Our goal is to obtain an approximate solution in the presence of computational errors. In order to meet this goal, in the case of inclusions, we study the behavior of iterates of nonexpansive set-valued mappings in the presence of computational errors.
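A hedged template for such a step (the linear approximation A_n is an assumption made because the mappings need not be differentiable; the chapter's exact scheme may differ):

```latex
% Newton-type step for solving F(x) = 0 with computational error e_n, where A_n is
% an invertible linear approximation of F near x_n (in the smooth case, A_n = F'(x_n)):
x_{n+1} = x_n - A_n^{-1} F(x_n) + e_n, \qquad \|e_n\| \le \varepsilon.
```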
- Research Article
- 10.1080/01630563.2010.489248
- Jul 13, 2010
- Numerical Functional Analysis and Optimization
We study the convergence of the projected subgradient method for constrained convex optimization in a Hilbert space. Our goal is to obtain an ε-approximate solution of the problem in the presence of computational errors, where ε is a given positive number. The results that we obtain are important in practice because computations always introduce numerical errors.
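For reference, a hedged sketch of the projected subgradient step with errors (notation assumed, not taken from the paper):

```latex
% Projected subgradient step for \min_{x \in C} f(x), with a subgradient
% g_n \in \partial f(x_n), step sizes \alpha_n > 0, and computational error e_n:
x_{n+1} = P_C\bigl(x_n - \alpha_n g_n\bigr) + e_n, \qquad \|e_n\| \le \varepsilon.
```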
- Research Article
- 10.11650/twjm/1500406077
- Dec 1, 2010
- Taiwanese Journal of Mathematics
We study the convergence of a proximal-like minimization algorithm using Bregman functions. We extend the convergence results by Censor and Zenios (1992) and by Chen and Teboulle (1993) by showing that the convergence of the algorithm is preserved in the presence of computational errors.
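A hedged sketch of the Bregman proximal step underlying this family of algorithms (notation assumed; the computational errors enter through inexact minimization of the subproblem):

```latex
% Bregman distance induced by a suitable convex function h:
D_h(x, y) = h(x) - h(y) - \langle \nabla h(y),\, x - y \rangle.
% Proximal-like step with parameters c_k > 0, solved only approximately:
x_{k+1} \approx \operatorname*{arg\,min}_x \bigl\{ f(x) + \tfrac{1}{c_k} D_h(x, x_k) \bigr\}.
```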
- Research Article
- 10.1016/j.jat.2013.07.012
- Aug 1, 2013
- Journal of Approximation Theory
Subgradient projection algorithms for convex feasibility problems in the presence of computational errors
- Book Chapter
- 10.1007/978-3-030-60300-7_2
- Jan 1, 2020
In this chapter, we study an extension of the projected subgradient method for minimization of convex and nonsmooth functions in the presence of computational errors. The problem is described by an objective function and a set of feasible points. Each iteration of the algorithm consists of two steps: the first calculates a subgradient of the objective function, while the second calculates a projection onto the feasible set. Each of these two steps carries its own computational error, and in general the two errors are different. In our recent research we showed that the algorithm generates a good approximate solution if all the computational errors are bounded from above by a small positive constant. Moreover, if the computational errors of the two steps are known, we determine what approximate solution can be obtained and how many iterates are needed for it. In this chapter, we generalize all these results to an extension of the projected subgradient method in which the projection onto the feasible set is replaced by a quasi-nonexpansive retraction onto this set.
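A toy numerical sketch of the two-step scheme described above (the problem data, error model, and constants are assumptions for illustration): a bounded error is injected separately into the subgradient step and into the projection step, and the projection could equally be swapped for a quasi-nonexpansive retraction.

```python
# Hypothetical sketch of an inexact two-step projected subgradient method.
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.abs(x).sum()              # nonsmooth convex objective (l1-norm)
subgrad = lambda x: np.sign(x)             # a subgradient of f
project = lambda x: np.clip(x, -1.0, 1.0)  # projection onto the box [-1, 1]^n;
                                           # a quasi-nonexpansive retraction
                                           # could replace this step
eps_g, eps_p = 1e-3, 1e-3                  # the two (different) error bounds
x = np.array([0.9, -0.7, 0.5])

for k in range(1, 1001):
    g = subgrad(x) + eps_g * rng.uniform(-1, 1, x.shape)  # inexact subgradient step
    y = x - g / np.sqrt(k)                                # diminishing step size
    x = project(y) + eps_p * rng.uniform(-1, 1, x.shape)  # inexact projection step

print("approximate minimizer:", x, "objective value:", f(x))
```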
- Book Chapter
- 10.1007/978-3-319-33255-0_8
- Jan 1, 2016
In a Hilbert space, we study the convergence of an iterative proximal point method to a common zero of a finite family of maximal monotone operators in the presence of computational errors. Most results known in the literature establish the convergence of proximal point methods when computational errors are summable. In this chapter, the convergence of the method is established for nonsummable computational errors. We show that the proximal point method generates a good approximate solution if the sequence of computational errors is bounded from above by a constant. Moreover, for a known computational error, we determine what approximate solution can be obtained and how many iterates are needed for it.
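One common realization of such a method, offered as a hedged sketch (whether the family is visited cyclically, and where the error enters, are assumptions):

```latex
% Cyclic inexact resolvent steps over the family A_1, \dots, A_m, with step sizes
% \lambda_n > 0 and errors e_n, aiming at x^* with 0 \in A_i(x^*) for every i:
x_{n+1} = (I + \lambda_n A_{i(n)})^{-1} x_n + e_n, \qquad i(n) = (n \bmod m) + 1, \quad \|e_n\| \le \varepsilon.
```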
- Book Chapter
- 10.1007/978-3-319-30921-7_14
- Jan 1, 2016
In this chapter we study the continuous subgradient algorithm for minimization of convex functions in the presence of computational errors. We show that the algorithm generates a good approximate solution if the computational errors are bounded from above by a small positive constant. Moreover, for a known computational error, we determine what approximate solution can be obtained and how much time is needed for it.
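A hedged template for the continuous-time dynamics in question (the form of the error term is an assumption):

```latex
% Continuous subgradient flow for minimizing a convex function f, with a bounded
% computational-error term e(t):
x'(t) \in -\partial f(x(t)) + e(t), \qquad \|e(t)\| \le \delta \ \text{for all } t \ge 0.
```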
- Research Article
- 10.11650/twjm/1500406796
- Sep 1, 2012
- Taiwanese Journal of Mathematics
In a Hilbert space, we study the asymptotic behavior of the subgradient method for solving a variational inequality in the presence of computational errors. Most results known in the literature establish convergence of optimization algorithms when computational errors are summable. In the present paper, the convergence of the subgradient method to the solution of a variational inequality is established for nonsummable computational errors. We show that the subgradient method generates good approximate solutions if the sequence of computational errors is bounded from above by a constant.