Abstract

This paper presents a study of the convergence rate of two projection methods for solving the variational inequality problem: find h ∈ K such that ⟨C(h), f − h⟩ ≥ 0 for all f ∈ K, where K is a closed convex subset of ℝⁿ, C is a mapping from K to ℝⁿ, and ⟨·,·⟩ denotes the inner product in ℝⁿ. The first method, proposed by Dafermos [6] for the case when C is continuously differentiable and strongly monotone, generates a sequence {fᵢ} in K which converges geometrically to the unique solution h ∈ K of the variational inequality; i.e., there exists a constant λ ∈ ]0,1[ such that for all i, ‖fᵢ₊₁ − h‖_G ≤ λ ‖fᵢ − h‖_G, where G is a symmetric positive definite matrix and ‖f‖_G = ⟨f, Gf⟩^{1/2} for all f ∈ ℝⁿ. The second method, proposed by Bertsekas and Gafni [8] for the case when K is polyhedral and C is of the form C = AᵀTA, where A is an m×n matrix and T: ℝᵐ → ℝᵐ is Lipschitz continuous and strongly monotone, generates a sequence {fᵢ} in K which converges to a solution h ∈ K of the variational inequality and satisfies the estimate ‖fᵢ₊₁ − h‖_G ≤ q βⁱ, where q > 0 and β ∈ ]0,1[. We examine the dependence of the constants λ and β on the parameters of the methods and establish that, except in particular cases, these constants do not attain values that would guarantee rapid convergence of the methods.
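The fixed-point projection iteration underlying methods of this kind can be sketched as follows. This is a hypothetical illustration, not code from the paper: K is taken to be the box [0,1]ⁿ (so the projection is a componentwise clip), C(f) = Mf + b with M symmetric positive definite (hence Lipschitz continuous and strongly monotone), and the step size ρ is chosen as α/L², with α the strong-monotonicity constant and L the Lipschitz constant, which makes the iteration a contraction and yields geometric convergence of the kind described above.

```python
import numpy as np

# Hypothetical illustration (not taken from the paper): the basic projection
# iteration f_{i+1} = P_K(f_i - rho * C(f_i)) for the variational inequality
# <C(h), f - h> >= 0 for all f in K.  Here K = [0, 1]^n, so P_K is a clip,
# and C(f) = M f + b with M symmetric positive definite, which makes C
# Lipschitz continuous and strongly monotone.
M = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 2.0]])   # symmetric positive definite
b = np.array([-1.0, 2.0, -0.5])

def C(f):
    return M @ f + b

def project_K(f):
    """Projection onto the box K = [0, 1]^n."""
    return np.clip(f, 0.0, 1.0)

# A step size rho < 2*alpha/L^2 makes f -> P_K(f - rho*C(f)) a contraction:
# alpha = smallest eigenvalue (strong-monotonicity constant),
# L = spectral norm (Lipschitz constant).
alpha = np.linalg.eigvalsh(M).min()
L = np.linalg.norm(M, 2)
rho = alpha / L**2

f = np.zeros(3)
steps = []                          # norms of successive displacements
for i in range(500):
    f_next = project_K(f - rho * C(f))
    steps.append(np.linalg.norm(f_next - f))
    f = f_next
# The displacement norms shrink geometrically; at the limit h the
# fixed-point condition h = P_K(h - rho*C(h)) holds, which is equivalent
# to the variational inequality.
```

Note that the contraction factor degrades as α/L shrinks, which is one concrete way the rate constant can fail to be small; the paper's analysis of λ and β makes this dependence on the method's parameters precise.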
