Abstract

We develop an iterative algorithm to recover the minimum p-norm solution of the functional linear equation $$Ax = b,$$ where $$A: \mathcal {X}\longrightarrow \mathcal {Y}\,$$ is a continuous linear operator between the two Banach spaces $$\mathcal {X}= L^p$$ , $$1< p < 2$$ , and $$\mathcal {Y}= L^r$$ , $$r > 1$$ , with $$x \in \mathcal {X}$$ and $$b \in \mathcal {Y}$$ . The algorithm is conceived within the framework of the Landweber method for functional linear equations in Banach spaces proposed by Schöpfer et al. (Inverse Probl 22:311–329, 2006). Specifically, at the n-th iteration the algorithm uses a linear combination of the current steepest “descent functional” $$A^* J \left( b - A x_n \right) $$ and the previous descent functional, where J denotes a duality map of the Banach space $$\mathcal {Y}$$ . In this sense, the algorithm can be viewed as a generalization of the classical conjugate gradient method applied to the normal equations in Hilbert spaces. We prove that the proposed iterative algorithm converges strongly to the minimum p-norm solution of the functional linear equation $$Ax = b$$ and that, with the discrepancy principle as stopping rule, it is also a regularization method. Owing to the geometrical properties of $$L^p$$ spaces, numerical experiments show that the method is fast, robust in terms of both restoration accuracy and stability, promotes sparsity, and reduces over-smoothing when reconstructing edges and abrupt intensity changes.
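The iteration described above can be illustrated by a minimal numerical sketch on discretized (finite-dimensional $$\ell^p$$) data. This is not the authors' implementation: the fixed step size `mu`, fixed momentum weight `beta`, and iteration count are illustrative assumptions (the paper would choose such quantities adaptively), and the normalized duality map $$J_p(x) = \Vert x\Vert _p^{2-p}\, |x|^{p-1}\, \mathrm {sign}(x)$$ is used as a stand-in for the duality maps of $$L^p$$ and $$L^r$$. The update is performed in the dual space and mapped back through the duality map of the dual exponent $$p^* = p/(p-1)$$, as in Landweber-type schemes in Banach spaces.

```python
import numpy as np

def duality_map(x, p):
    """Normalized duality map J_p of ell^p: J_p(x) = ||x||^(2-p) * |x|^(p-1) * sign(x)."""
    nrm = np.linalg.norm(x, ord=p)
    if nrm == 0.0:
        return np.zeros_like(x)
    return nrm ** (2 - p) * np.sign(x) * np.abs(x) ** (p - 1)

def cg_like_landweber(A, b, p=1.5, r=2.0, mu=0.1, beta=0.3, n_iter=2000):
    """Sketch of the CG-like Landweber iteration in dual space (assumed fixed mu, beta):

        d_n       = A^T J_r(b - A x_n) + beta * d_{n-1}   # combined descent functional
        x*_{n+1}  = x*_n + mu * d_n                       # update in the dual space
        x_{n+1}   = J_{p*}(x*_{n+1})                      # back to the primal, p* = p/(p-1)
    """
    p_star = p / (p - 1)          # dual exponent of p
    x_dual = np.zeros(A.shape[1])  # dual-space iterate x*_n
    d = np.zeros(A.shape[1])       # previous descent functional d_{n-1}
    for _ in range(n_iter):
        x = duality_map(x_dual, p_star)            # current primal iterate x_n
        residual = b - A @ x
        d = A.T @ duality_map(residual, r) + beta * d
        x_dual = x_dual + mu * d
    return duality_map(x_dual, p_star)
```

Setting `beta=0` recovers a plain Landweber-type iteration in dual space; the momentum term `beta * d` is what mimics the conjugate gradient combination of current and previous descent directions.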
