Geometric Approach to Symmetric Positive Definite Linear Systems
This paper compares the performance of the conjugate gradient method and a geometric approach in the case of symmetric positive definite (SPD) linear systems. The approach is based on the geometric theory of ODEs, which was effectively initiated by Poincaré and Lyapunov. The simplest and most obvious advantage of the geometric approach over the conjugate gradient method (the MATLAB code pcg) is that it yields both the inverse of the underlying positive definite matrix and the solution. We present various numerical examples that demonstrate the superiority of the geometric approach. For SPD linear systems, it provides much higher accuracy than the conjugate gradient method. In particular, since it is a one-stop procedure, it avoids the growth of accumulated round-off errors to some extent.
- Book Chapter
- 10.1007/978-3-319-32149-3_10
- Jan 1, 2016
We present a multi-threaded solver for symmetric positive definite linear systems whose coefficient matrix features a bordered-band non-zero pattern. The algorithms that implement this approach rely heavily on a compact storage format, tailored to this type of matrix, that reduces memory requirements, produces a regular data access pattern, and makes it possible to cast the bulk of the computations in terms of efficient kernels from the Level-3 and Level-2 BLAS. The efficiency of our approach is illustrated by numerical experiments.
- Research Article
- 10.1080/10556788.2023.2189716
- Apr 21, 2023
- Optimization Methods and Software
The conjugate gradient (CG) method is a classic Krylov subspace method for solving symmetric positive definite linear systems. We analyze an analogous semi-conjugate gradient (SCG) method, a special case of the existing semi-conjugate direction (SCD) methods, for unsymmetric positive definite linear systems. Unlike CG, SCG requires the solution of a lower triangular linear system to produce each semi-conjugate direction. We prove that SCG is theoretically equivalent to the full orthogonalization method (FOM), which is based on the Arnoldi process and converges in a finite number of steps. Because SCG's triangular system increases in size each iteration, Dai and Yuan [Study on semi-conjugate direction methods for non-symmetric systems, Int. J. Numer. Meth. Eng. 60(8) (2004), pp. 1383–1399] proposed a sliding window implementation (SWI) to improve efficiency. We show that the directions produced are still locally semi-conjugate. A counter-example illustrates that SWI is different from the direct incomplete orthogonalization method (DIOM), which is FOM with a sliding window. Numerical experiments from the convection-diffusion equation and other applications show that SCG is robust and that the sliding window implementation SWI allows SCG to solve large systems efficiently.
- Preprint Article
- 10.13140/rg.2.2.19916.08327
- Jun 6, 2022
- arXiv (Cornell University)
The conjugate gradient (CG) method is a classic Krylov subspace method for solving symmetric positive definite linear systems. We introduce an analogous semi-conjugate gradient (SCG) method for unsymmetric positive definite linear systems. Unlike CG, SCG requires the solution of a lower triangular linear system to produce each semi-conjugate direction. We prove that SCG is theoretically equivalent to the full orthogonalization method (FOM), which is based on the Arnoldi process and converges in a finite number of steps. Because SCG's triangular system increases in size each iteration, we study a sliding window implementation (SWI) to improve efficiency, and show that the directions produced are still locally semi-conjugate. A counterexample illustrates that SWI is different from the direct incomplete orthogonalization method (DIOM), which is FOM with a sliding window. Numerical experiments from the convection-diffusion equation and other applications show that SCG is robust and that the sliding window implementation SWI allows SCG to solve large systems efficiently.
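For context on the two SCG records above, the classic CG iteration that SCG generalizes can be sketched as follows (a textbook reference implementation, not the SCG method of the papers; the function name and tolerance are illustrative):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, maxiter=None):
    """Plain conjugate gradient for a symmetric positive definite A."""
    n = b.size
    maxiter = maxiter or n
    x = np.zeros(n)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # keep directions A-conjugate
        rs = rs_new
    return x
```

For SPD matrices the directions are mutually A-conjugate and no triangular solve is needed, which is exactly the property SCG gives up to handle unsymmetric positive definite systems.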
- Research Article
67
- 10.1137/s0895479897330194
- Jan 1, 2000
- SIAM Journal on Matrix Analysis and Applications
Many scientific applications require one to solve successively linear systems Ax=b with different right-hand sides b and a symmetric positive definite matrix A. The conjugate gradient method applied to the first system generates a Krylov subspace which can be efficiently recycled thanks to orthogonal projections in subsequent systems. A modified conjugate gradient method is then applied with a specific initial guess and initial descent direction and a modified descent direction during the iterations. This paper gives new theoretical results for this method and proposes a new version. Numerical experiments show the efficacy of our method even for quite different right-hand sides.
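The specific initial guess mentioned above can be illustrated with a Galerkin projection onto the recycled subspace, assuming a basis W of the previous Krylov subspace has been stored (a generic sketch; the paper's modified descent directions are not reproduced, and the function name is illustrative):

```python
import numpy as np

def recycled_initial_guess(A, b, W):
    """Pick x0 in span(W) so that the initial residual b - A @ x0
    is orthogonal to W (a Galerkin condition on the subspace recycled
    from an earlier solve). W is an n-by-k basis matrix."""
    # Solve the small projected system (W^T A W) y = W^T b.
    y = np.linalg.solve(W.T @ A @ W, W.T @ b)
    return W @ y
```

Starting CG from such an x0 removes the components of the error that the recycled subspace can already represent, which is the source of the speedup for subsequent right-hand sides.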
- Research Article
5
- 10.1007/s11227-017-2020-z
- Mar 28, 2017
- The Journal of Supercomputing
We propose an approach to estimate the power consumption of algorithms, as a function of the frequency and number of cores, using only a very reduced set of real power measurements. In addition, we also provide the formulation of a method to select the voltage–frequency scaling and concurrency throttling configurations that should be tested in order to obtain accurate estimations of the power dissipation. The power models and selection methodology are verified using two real scientific applications: the stencil-based 3D MPDATA algorithm and the conjugate gradient (CG) method for sparse linear systems. MPDATA is a crucial component of the EULAG model, which is widely used in weather forecast simulations. The CG algorithm is the keystone for iterative solution of sparse symmetric positive definite linear systems via Krylov subspace methods. The reliability of the method is confirmed for a variety of ARM and Intel architectures, where the estimated results correspond to the real measured values with the average error being slightly below 5% in all cases.
- Conference Article
2
- 10.1109/icacte.2010.5578996
- Aug 1, 2010
This paper discusses a class of two-stage iterations whose outer iterative methods are block SOR methods for the parallel solution of linear systems. Convergence is shown for symmetric positive definite linear systems, and an approximate optimal relaxation factor is derived for block tridiagonal matrices. Numerical results for the Poisson model problem are presented.
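As a point of reference for the outer iteration, a scalar (point) SOR sweep can be sketched as follows (the paper's block variant and two-stage inner solves are not reproduced; names and the default relaxation factor are illustrative):

```python
import numpy as np

def sor_solve(A, b, omega=1.2, tol=1e-10, maxiter=500):
    """Point SOR iteration, the scalar analogue of the block SOR
    outer iteration; for SPD A it converges whenever 0 < omega < 2."""
    n = b.size
    x = np.zeros(n)
    for _ in range(maxiter):
        for i in range(n):
            # Gauss-Seidel value for x[i], using already-updated entries.
            sigma = A[i] @ x - A[i, i] * x[i]
            gs = (b[i] - sigma) / A[i, i]
            # Over-relaxed update: x[i] <- (1-omega)*x[i] + omega*gs.
            x[i] += omega * (gs - x[i])
        if np.linalg.norm(b - A @ x) < tol:
            break
    return x
```

In the block variant each scalar division by A[i, i] becomes the solution of a diagonal-block subsystem, which is where the inner stage of the two-stage scheme enters.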
- Research Article
- 10.1016/j.egypro.2011.12.245
- Jan 1, 2011
- Energy Procedia
Some results on convergence of two-stage iterative methods for symmetric positive definite linear systems
- Research Article
4
- 10.1007/s10915-019-00969-4
- May 4, 2019
- Journal of Scientific Computing
The deflated block conjugate gradient (D-BCG) method is an attractive approach for solving symmetric positive definite linear systems with multiple right-hand sides. However, orthogonality between the block residual vectors and the deflation subspace is gradually lost during the execution of the underlying algorithm, which usually makes the algorithm unstable or delays its convergence. Full reorthogonalization could be employed as a remedy to maintain a sufficient level of orthogonality, but it is quite costly. In this paper, we present a new projected variant of the deflated block conjugate gradient (PD-BCG) method to mitigate the loss of this orthogonality, which helps counter the convergence delay and thus achieve the theoretically faster convergence rate of D-BCG. Meanwhile, the proposed PD-BCG method incurs scarcely any extra computational cost, while having the same theoretical properties as D-BCG in exact arithmetic. Additionally, an automated reorthogonalization strategy is introduced as an alternative for the PD-BCG method. Numerical experiments demonstrate that PD-BCG is more efficient than its counterparts, especially when solving ill-conditioned linear systems or linear systems suffering from rank deficiency.
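The orthogonality condition at issue can be illustrated with a plain projection that removes the deflation-subspace component from a residual (a generic sketch, not the authors' PD-BCG algorithm; Z denotes a hypothetical deflation basis):

```python
import numpy as np

def project_residual(r, Z):
    """Project r onto the orthogonal complement of span(Z):
    r <- r - Z (Z^T Z)^{-1} Z^T r.  Reapplying such a projection
    during the iteration is one way to counter the gradual loss of
    residual/deflation-subspace orthogonality in finite precision."""
    return r - Z @ np.linalg.solve(Z.T @ Z, Z.T @ r)
```

In exact arithmetic the projection is a no-op on a D-BCG residual, which is why a projected variant can share the theoretical properties of the original method.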
- Research Article
5
- 10.1007/bf02142490
- Dec 1, 1996
- Numerical Algorithms
A hybrid iterative scheme that combines the Conjugate Gradient (CG) method with Richardson iteration is presented. This scheme is designed for the solution of linear systems of equations with a large sparse symmetric positive definite matrix. The purpose of the CG iterations is to improve an available approximate solution, as well as to determine an interval that contains all, or at least most, of the eigenvalues of the matrix. This interval is used to compute iteration parameters for Richardson iteration. The attraction of the hybrid scheme is that most of the iterations are carried out by the Richardson method, the simplicity of which makes efficient implementation on modern computers possible. Moreover, the hybrid scheme yields, at no additional computational cost, accurate estimates of the extreme eigenvalues of the matrix. Knowledge of these eigenvalues is essential in some applications.
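The Richardson stage can be sketched as follows, assuming the eigenvalue interval [lmin, lmax] has already been estimated by the CG stage (the relaxation parameters are reciprocals of Chebyshev nodes on that interval; the CG-based eigenvalue estimation itself and the paper's parameter ordering are omitted):

```python
import numpy as np

def chebyshev_richardson(A, b, x, lmin, lmax, m=40):
    """Richardson iteration x <- x + tau_k (b - A x), with the m
    relaxation parameters taken as reciprocals of the Chebyshev
    nodes on the eigenvalue interval [lmin, lmax]."""
    center = 0.5 * (lmax + lmin)
    half = 0.5 * (lmax - lmin)
    for k in range(m):
        theta = np.pi * (2 * k + 1) / (2 * m)   # Chebyshev angle
        tau = 1.0 / (center + half * np.cos(theta))
        x = x + tau * (b - A @ x)
    return x
```

Each sweep needs only a matrix-vector product and vector updates, which is the simplicity that makes the Richardson stage attractive on modern computers; in practice the parameters are applied in a special order for numerical stability.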
- Research Article
1
- 10.1080/00207160802275951
- Apr 1, 2010
- International Journal of Computer Mathematics
This paper concerns the solution of very large symmetric positive semidefinite (singular) linear systems involved in the problem of optimal surface parameterization using inverse curvature mapping. Two approaches are presented that transform the singular linear systems into two kinds of symmetric positive definite linear systems, so that the well-known Conjugate Gradient (CG) method can be used to solve them. Numerical experiments are run on two practical large problems to illustrate that the CG algorithm works very efficiently.
- Research Article
1
- 10.1016/j.amc.2013.10.042
- Nov 20, 2013
- Applied Mathematics and Computation
Backward error analysis of Choleski Q.I.F. for the solution of symmetric positive definite linear systems
- Research Article
2
- 10.1016/j.amc.2009.12.033
- Dec 22, 2009
- Applied Mathematics and Computation
Generalizations of the nonstationary multisplitting iterative method for symmetric positive definite linear systems
- Research Article
28
- 10.1137/19m1298263
- Jan 1, 2021
- SIAM Journal on Scientific Computing
Exploiting Lower Precision Arithmetic in Solving Symmetric Positive Definite Linear Systems and Least Squares Problems
- Research Article
2
- 10.1023/a:1019181417702
- May 1, 1997
- Numerical Algorithms
In this paper, we address the problem of solving sparse symmetric linear systems on parallel computers. With further restrictive assumptions on the matrix (e.g., bidiagonal or tridiagonal structure), several direct methods may be used. These methods give ideas for constructing efficient data parallel preconditioners for general positive definite symmetric matrices. We describe two examples of such preconditioners for which the factorization (i.e., the construction of the preconditioning matrix) turns out to be parallel.
- Research Article
11
- 10.1016/j.parco.2017.12.005
- Jan 31, 2018
- Parallel Computing
A scalable iterative dense linear system solver for multiple right-hand sides in data analytics