Abstract

The purpose of this work is to construct iterative methods for solving a split minimization problem using a self-adaptive step size, a conjugate gradient direction, and an inertial technique. We introduce our algorithm and prove a strong convergence theorem in the framework of Hilbert spaces. We then demonstrate numerically how the extrapolation factor (θn) in the inertial term and a step size parameter affect the performance of the proposed algorithm. Additionally, we apply the proposed algorithms to the signal recovery problem. Finally, we compare the signal-recovery quality of our algorithm with that of three previously published methods.

Highlights

  • Let C and Q be two closed convex subsets of two real Hilbert spaces H1 and H2, respectively, and denote the metric projection onto C by ProjC

  • The algorithms we construct combine the following techniques: (1) a self-adaptive step size technique that avoids computing the operator norm of a bounded linear operator, which is difficult, if not impossible, to calculate or even estimate; (2) inertial and conjugate gradient direction techniques to speed up the convergence rate (see the step size sketch after this list)

  • The extrapolation factor (θn) in the inertial term results in a faster rate of convergence, and the step size parameter affects the rate of convergence as well
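
The step size rule in point (1) can be made concrete with a short sketch. The rule below is one common self-adaptive choice from the split-problem literature, written with hypothetical helper names (self_adaptive_step, proj_Q) and an illustrative relaxation parameter rho; the paper's exact rule may differ:

```python
import numpy as np

def self_adaptive_step(A, x, proj_Q, rho=1.0, eps=1e-12):
    # A common self-adaptive rule:
    #   tau = rho * ||(I - P_Q)Ax||^2 / ||A^T (I - P_Q)Ax||^2,
    # which needs only products with A and A^T, never the norm ||A||.
    r = A @ x - proj_Q(A @ x)   # residual (I - P_Q)Ax
    g = A.T @ r                 # gradient of f(x) = 0.5*||(I - P_Q)Ax||^2
    denom = float(np.dot(g, g))
    return 0.0 if denom < eps else rho * float(np.dot(r, r)) / denom
```

Because the step size is recomputed from the current iterate, no a priori estimate of the operator norm of A is ever required.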


Summary

Introduction

Let C and Q be two closed convex subsets of two real Hilbert spaces H1 and H2, respectively, and denote the metric projection onto C by ProjC. Sakurai and Iiduka [21] introduced and investigated an iterative method for finding a fixed point of a nonexpansive mapping. Their method is based on the concept of conjugate gradient directions (6), which can be used to accelerate the steepest descent method; it generates the sequence {xn} via an initial direction d1, an updated direction dn+1 at each step, an intermediate point yn, and the next iterate xn+1. Kaewyong and Sitthithakerngkiet [32] introduced a self-adaptive step size algorithm for solving a split minimization problem; their iteration is built from an auxiliary point un, a self-adaptive step size τn, an intermediate point yn, and the update xn+1. The algorithms we construct combine the following techniques: (1) a self-adaptive step size technique that avoids computing the operator norm of a bounded linear operator, which is difficult, if not impossible, to calculate or even estimate; and (2) inertial and conjugate gradient direction techniques to speed up the convergence rate. We compare the performance of our algorithm with that of three strong convergence algorithms published before our work.
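
As a reading aid, the following schematic sketch shows, under our own illustrative assumptions (A given as a matrix, the projections proj_C and proj_Q given as callables, and fixed parameters theta, beta, rho), how the inertial extrapolation, the conjugate-gradient-type direction, and the self-adaptive step size can be combined in a single loop; it is a sketch of the general pattern, not the paper's exact algorithm:

```python
import numpy as np

def inertial_cg_split(A, proj_C, proj_Q, x0, theta=0.5, beta=0.25,
                      rho=1.0, iters=200, eps=1e-12):
    # Schematic loop combining the three ingredients named above;
    # theta (inertia), beta (CG-type direction), and rho (step size
    # relaxation) are illustrative parameters, not the paper's.
    x_prev, x = x0.copy(), x0.copy()
    d = np.zeros_like(x0)
    for _ in range(iters):
        w = x + theta * (x - x_prev)        # inertial extrapolation
        r = A @ w - proj_Q(A @ w)           # residual (I - P_Q)Aw
        g = A.T @ r                         # gradient of 0.5*||(I - P_Q)Aw||^2
        d = -g + beta * d                   # conjugate-gradient-type direction
        gg = float(np.dot(g, g))
        tau = 0.0 if gg < eps else rho * float(np.dot(r, r)) / gg
        x_prev, x = x, proj_C(w + tau * d)  # self-adaptive projected step
    return x
```

In a signal recovery experiment, A would play the role of the sensing matrix and proj_Q the projection onto a set built from the observed data; theta and the step size parameter are precisely the quantities whose effect on performance is examined numerically.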

Preliminaries
Results
Numerical Examples
Conclusions