Abstract

The numerical treatment of many mathematical models (arising, for example, in physics, chemistry, biology or economics) often leads to huge algebraic problems, which are difficult to handle, both in terms of the storage needed and the computing time required, even on large modern computers. Fortunately, the matrices that occur in these algebraic problems are sparse (many of their elements are equal to zero). Exploiting the sparsity leads to savings in both storage and computing time, so that problems which cannot be handled numerically when the zero elements are stored in the computer memory and arithmetic operations involving them are performed become tractable when the sparsity is exploited in a proper way. There are two basic groups of storage schemes for exploiting sparsity. In schemes of the first group, the non-zero elements occupy permanent locations in the computer memory throughout the computational process; such schemes are therefore called static. In schemes of the second group, a non-zero element may be moved from one location to another during the computation; such schemes are called dynamic. The advantages and limitations of the schemes from these two groups are discussed. The advances achieved after 1980 in improving the performance of schemes belonging to both groups are presented in a systematic way. Some questions that are still open are briefly discussed. Advances in other stages of the exploitation of sparsity, not directly connected with the storage schemes used, are outlined in the last section.
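To make the contrast concrete, the following is a minimal sketch of one well-known static scheme, compressed sparse row (CSR) storage, in which every non-zero element keeps a fixed position in three arrays for the whole computation. This example is illustrative only and is not taken from the paper; the function names are my own.

```python
# Illustrative sketch of a static storage scheme: Compressed Sparse Row (CSR).
# Only the non-zero elements are stored, and their locations never change.

def dense_to_csr(a):
    """Convert a dense matrix (list of row lists) to CSR arrays."""
    values, col_index, row_ptr = [], [], [0]
    for row in a:
        for j, x in enumerate(row):
            if x != 0:
                values.append(x)      # non-zero value
                col_index.append(j)   # its column
        row_ptr.append(len(values))   # where the next row starts
    return values, col_index, row_ptr

def csr_matvec(values, col_index, row_ptr, x):
    """Multiply a CSR-stored matrix by a dense vector, touching only non-zeros."""
    y = []
    for i in range(len(row_ptr) - 1):
        s = 0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            s += values[k] * x[col_index[k]]
        y.append(s)
    return y

a = [[4, 0, 0],
     [0, 0, 5],
     [1, 2, 0]]
vals, cols, ptr = dense_to_csr(a)
print(csr_matvec(vals, cols, ptr, [1, 1, 1]))  # [4, 5, 3]
```

The savings the abstract describes are visible here: the 3×3 matrix with four non-zeros is held in arrays of total length 4 + 4 + 4 rather than 9, and the matrix-vector product performs only four multiplications. A static scheme like this suits algorithms whose sparsity pattern is fixed in advance; a dynamic scheme would instead allow new fill-in elements to be inserted during, say, Gaussian elimination, at the cost of moving elements between locations.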
