Many techniques are used for solving large sparse linear systems. Some of the most popular methods for the more difficult problems combine direct and iterative algorithms to yield efficient multistage procedures. Although each stage is standard, the manner in which the stages are combined is often novel. Both multigrid and preconditioned conjugate gradients fall into this class of numerical methods.

In [10], two promising algorithms were described with supporting numerical studies. The first was cyclic application of model-problem ADI iteration followed by a conjugate gradient iteration on the deviation of the model problem from the actual problem. Some new developments relating to this procedure are described in a companion paper in this journal [12]. The second was a two-level iteration in which a variational correction with a coarse-grid basis was applied cyclically to the result of conventional iteration over a fine grid. This was the first published implementation of a multigrid method based on the variational techniques that provide the foundations for finite element computation.

In these initial two-level variational studies, certain advantages of additive over multiplicative coarse-mesh correction were not recognized. The coefficient matrix for additive correction does not depend on the fine-mesh iterate and thus need be computed only once; moreover, it need be factored only once for direct solution. Only the right-hand side of the correction equations varies from cycle to cycle. Brandt [4] and others have exploited this in multigrid and related methods. Some numerical comparisons of additive and multiplicative correction for two-level iteration are given in [11]. It was also not recognized that, under suitable conditions, one may use the two-level iteration as a preconditioner for conjugate gradients, thus combining the two initially distinct methods, as suggested subsequently in [1, 7, 9].
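The structure of such a two-level cycle with additive correction can be sketched as follows. This is a minimal illustration, not the implementation studied in [10]: it assumes a one-dimensional Poisson model problem, weighted Jacobi as the fine-mesh iteration, and linear interpolation as the coarse-grid basis; all grid sizes and helper names are illustrative. The point of the sketch is that the coarse coefficient matrix is formed from the operator and the basis alone, so it can be prepared once, while only the residual (the right-hand side of the correction equations) changes each cycle.

```python
import numpy as np

def poisson_matrix(n):
    """Tridiagonal (-1, 2, -1) matrix of the 1-D model problem."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def prolongation(nc, nf):
    """Linear interpolation from nc coarse points to nf = 2*nc + 1 fine points."""
    P = np.zeros((nf, nc))
    for j in range(nc):
        i = 2 * j + 1          # fine node coinciding with coarse node j
        P[i, j] = 1.0
        P[i - 1, j] = 0.5
        P[i + 1, j] = 0.5
    return P

nc, nf = 15, 31
A = poisson_matrix(nf)
P = prolongation(nc, nf)
Ac = P.T @ A @ P               # variational coarse matrix: independent of the
Ac_inv = np.linalg.inv(Ac)     # iterate, so prepared once (in practice one
                               # would factor Ac once; the explicit inverse is
                               # used here only for brevity)

def two_level_cycle(x, b, sweeps=3, omega=2.0 / 3.0):
    d = np.diag(A)
    for _ in range(sweeps):
        x = x + omega * (b - A @ x) / d      # fine-mesh (smoothing) iteration
    r = b - A @ x                            # only this RHS varies per cycle
    return x + P @ (Ac_inv @ (P.T @ r))      # additive coarse-grid correction

b = np.ones(nf)
x = np.zeros(nf)
for _ in range(10):
    x = two_level_cycle(x, b)
res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

Because the correction is additive, `Ac` plays the role of the once-factored coefficient matrix described above; a multiplicative scheme would have to rebuild it from the current fine-mesh iterate on every cycle.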
Some insight into the nature of such an iteration may be gained by a qualitative analysis of error behavior. The fine-mesh iteration acts primarily on high-mode error components, and the coarse-mesh correction reduces low-mode components. In some implementations the two iteration matrices commute, but in most they do not. In the commuting case, careful choice of iteration parameters precludes benefit from a conjugate gradient third stage. In the noncommuting case, the conjugate gradient stage should act on error that persists because of component mixing between the other two stages. The automatic extrapolation in the conjugate gradient method greatly reduces the sensitivity of convergence to the choice of iteration parameters for the fine-mesh iteration.
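The mode-selectivity claim is easy to verify numerically. The following sketch again assumes a one-dimensional Poisson model problem with weighted Jacobi as the fine-mesh iteration (parameters are illustrative): starting from a single discrete sine mode of error, a few sweeps nearly annihilate the most oscillatory mode while leaving the smoothest mode almost untouched.

```python
import numpy as np

n = 31
h = 1.0 / (n + 1)
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D model problem
d = np.diag(A)
omega = 2.0 / 3.0                                        # Jacobi weight

def remaining_error(k, sweeps=10):
    """Take the k-th discrete sine mode (an eigenvector of A) as the initial
    error, apply weighted Jacobi sweeps to the error equation A e = 0, and
    return the norm of the surviving error (initial norm is 1)."""
    x = np.arange(1, n + 1) * h
    e = np.sin(k * np.pi * x)
    e /= np.linalg.norm(e)
    for _ in range(sweeps):
        e = e - omega * (A @ e) / d
    return np.linalg.norm(e)

low = remaining_error(1)     # smooth (low-mode) error: barely reduced
high = remaining_error(n)    # oscillatory (high-mode) error: damped fast
```

The smooth error that survives the fine-mesh sweeps is exactly what the coarse-mesh correction is suited to remove, which is why the two stages complement each other; the conjugate gradient stage then mops up whatever mixing between the two leaves behind.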