Abstract

Many approaches have been proposed for solving systems of linear algebraic equations (s.l.a.e.), which is a basic problem of computational linear algebra. The usual approach is as follows: a computational scheme is preassigned /1/, i.e., a method of solving the s.l.a.e. is fixed, thereby determining in advance (fairly strictly) the quality of the approximations obtained; then, if possible, the influence of errors present in the initial data, or arising during the computations, on the final result is estimated. This approach is quite adequate for a wide class of well-posed s.l.a.e. It is clear a priori (and is confirmed by careful analysis of the maximum accuracy attainable by the different methods) that most of the popular methods have virtually the same quality when the system matrix is well-conditioned, and only differ in minor details. With the extension of computing practice, we now need methods of solution stable under small variations of the matrix and right-hand side, which are also suitable for ill-posed problems. Most of the familiar methods are then very inefficient and lead to solutions which cannot be sensibly interpreted. It often happens that the computational process cannot in fact be completed and breaks off with an emergency stop, no matter how carefully the programmer guards against this situation. The method of regularization has provided new scope for solving ill-posed problems. The basic idea is to stabilize the problem by introducing a supplementary (regularization) parameter and choosing it with a view to compromising between a reasonably accurate approximation of the initial problem and the stability of the latter. It is not always easy to achieve this compromise in practice, and special algorithms have to be used to choose the parameter. These algorithms are usually based on matching the accuracy of the solution of the problem with the accuracy of the input data, i.e., the matrix and right-hand side.
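The stabilizing effect of the supplementary parameter can be illustrated with Tikhonov regularization, the standard instance of the method described above. The sketch below (names and the test matrix are illustrative, not from the paper) solves the regularized normal equations for a nearly singular system:

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Solve the regularized normal equations (A^T A + alpha*I) x = A^T b.

    Adding alpha*I shifts every eigenvalue of A^T A up by alpha, so the
    regularized matrix stays invertible even when A is ill-conditioned,
    at the cost of a small bias in the solution.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# An ill-conditioned 2x2 system: the rows are nearly parallel,
# so unregularized solvers amplify any perturbation of the data.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])   # exact solution is (1, 1)

x_reg = tikhonov_solve(A, b, alpha=1e-6)
```

For this example the regularized solution stays close to the exact one while the regularized matrix has a far smaller condition number than `A.T @ A` itself.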
A very popular method is to base the choice of regularization parameter on the discrepancy principle, i.e., to match the discrepancy in the regularized solution with the error in specifying the matrix and right-hand side. This method was justified in detail theoretically in /2, 3/, has been put in algorithmic form, and leads to stabilization of the approximate solution when the initial data contain errors. However the parameter is defined, the practical application of the method of regularization runs into difficulties. First, repeated solution of the regularized problem is usually needed. Second, and more important, the usual method of application is strictly linked with the specification of the root mean square errors of the initial data. Experience shows that, in practice, generally only pointwise estimates of these errors are known. While the usual approach can be used under these conditions, it is not always justified, since it involves under-utilization of the a priori information. Below, we develop basically simple but reliable ways of solving s.l.a.e. with disturbed initial data, using information about pointwise estimates of their errors.
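The discrepancy principle can be sketched as a one-dimensional search: the residual norm of the Tikhonov-regularized solution grows monotonically with the parameter, so one can bisect for the parameter whose residual matches the known error level. This is a minimal illustration under assumed data (the function names, the random test problem, and the bracketing interval are ours, not the paper's), and it also shows the first practical difficulty noted above, namely that the regularized problem must be solved repeatedly:

```python
import numpy as np

def tikhonov(A, b, alpha):
    # Regularized normal equations: (A^T A + alpha*I) x = A^T b.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

def choose_alpha_by_discrepancy(A, b, delta, lo=1e-14, hi=1e2, iters=60):
    """Bisect on log(alpha): the residual ||A x_alpha - b|| increases
    monotonically with alpha, so we search for the alpha whose residual
    equals the data-error level delta (the discrepancy principle)."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)          # geometric midpoint
        r = np.linalg.norm(A @ tikhonov(A, b, mid) - b)
        if r < delta:
            lo = mid                    # residual below delta: raise alpha
        else:
            hi = mid                    # residual above delta: lower alpha
    return np.sqrt(lo * hi)

# Synthetic noisy data with a known root-mean-square error level delta.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.ones(5)
noise = 0.01 * rng.standard_normal(20)
b = A @ x_true + noise
delta = np.linalg.norm(noise)

alpha = choose_alpha_by_discrepancy(A, b, delta)
x = tikhonov(A, b, alpha)
```

Each bisection step solves one regularized problem, so the cost is one linear solve per iteration; the returned solution has residual norm matching `delta`, which is exactly the matching of solution accuracy to data accuracy described above.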
